OpenAI, the American artificial intelligence research organization, has postponed the release of a new tool designed to detect text generated by its models, citing internal concerns. The tool, described as “highly accurate,” was intended to identify content produced by ChatGPT and similar AI systems.
According to a Wall Street Journal report on August 4, OpenAI’s decision to delay the tool stems from worries about potential misuse and its impact on non-English speakers.
The company had previously announced in May that it was developing methods to trace AI-generated content. However, it has since updated its blog post to note that the release is on hold.
The tool was praised for its effectiveness at detecting tampered text and establishing text provenance. Even so, OpenAI remains cautious, citing concerns that bad actors could find ways to bypass the detection methods.
The company also worries that the tool could disproportionately affect non-English speakers, discouraging them from using AI for writing because of potential issues with text translated from English.
OpenAI’s approach involves invisible watermarking paired with proprietary detection techniques. Despite this progress, the company has not provided a new release date for the tool.
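For context, one published approach to invisible text watermarking (not necessarily OpenAI’s unpublished method) biases a language model toward a pseudorandom “green list” of tokens at each step; a detector holding the secret key then checks whether green tokens occur far more often than chance. The minimal sketch below illustrates the statistical idea with a toy vocabulary; the key, vocabulary, and generator are hypothetical stand-ins, not OpenAI’s implementation.

```python
# Toy sketch of "green-list" statistical text watermarking.
# Illustrative only: vocabulary, key, and the trivial generator
# are assumptions, not OpenAI's (unpublished) technique.

import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # hypothetical vocabulary
GREEN_FRACTION = 0.5                      # share of vocab marked "green" each step
KEY = b"secret-watermark-key"             # private key known only to the detector


def green_list(prev_token: str) -> set[str]:
    """Deterministically partition the vocabulary, seeded by the previous token."""
    seed = hashlib.sha256(KEY + prev_token.encode()).digest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))


def generate_watermarked(length: int, start: str = "tok0") -> list[str]:
    """Toy 'model' that only samples green tokens, embedding the mark."""
    out, prev = [], start
    for _ in range(length):
        tok = random.choice(sorted(green_list(prev)))
        out.append(tok)
        prev = tok
    return out


def detect(tokens: list[str], start: str = "tok0") -> float:
    """Return a z-score: how far the green-token count exceeds chance."""
    hits, prev = 0, start
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    n = len(tokens)
    expected = n * GREEN_FRACTION
    stdev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stdev


marked = generate_watermarked(200)
print(f"watermarked z-score:   {detect(marked):.1f}")    # large positive
unmarked = random.sample(VOCAB, 200)
print(f"unwatermarked z-score: {detect(unmarked):.1f}")  # near zero
```

On watermarked output every token falls in the green list, so the z-score is large; unmarked text scores near zero. Real schemes bias rather than force each token choice, trading detectability against output quality, which is one reason paraphrasing or translation can weaken the signal.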
Currently, other AI detection products on the market claim to identify AI-generated content, but none have proven consistently accurate in peer-reviewed studies.
OpenAI’s tool would be the first detector designed specifically for its own models, making the delayed release a notable development in the AI field.