OpenAI and ChatGPT’s Anti-Cheating Tool: Between Transparency and Economic Interests 🤖
Since the emergence of ChatGPT, many teachers have faced an unprecedented challenge: distinguishing authentic work from text generated by artificial intelligence. In response to this growing problem, OpenAI has developed a highly effective anti-cheating tool. Yet despite its effectiveness, the company is hesitant to deploy it. Here’s why.
The invisible watermark 👻
OpenAI, the company behind the famous ChatGPT chatbot, faces a dilemma over releasing a tool for detecting AI-generated text. According to The Wall Street Journal, the company has developed technology capable of identifying text produced by ChatGPT with 99.9% accuracy, but it is hesitant to make it public.
This technology relies on inserting an invisible “watermark” into the text ChatGPT generates. The marking could then be identified by a specialized detection tool, making it possible to distinguish AI-created content from human-written content.
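OpenAI hasn’t published the details of its scheme, but a well-known approach from the research literature (the “green list” watermark of Kirchenbauer et al.) gives a feel for how such marking can work: at each generation step, a pseudorandom subset of the vocabulary is favored, and a detector later checks whether suspiciously many tokens fall in those subsets. The sketch below is a minimal toy illustration of that idea, not OpenAI’s actual method; the vocabulary size and the GAMMA and DELTA parameters are arbitrary assumptions.

```python
import hashlib
import math
import random

GAMMA = 0.5   # fraction of the vocabulary put on the "green list" at each step
DELTA = 4.0   # logit bias added to green tokens during generation

def green_list(prev_token: int, vocab_size: int) -> set[int]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(GAMMA * vocab_size)])

def sample_watermarked(logits: list[float], prev_token: int) -> int:
    """Sample the next token after nudging green-list logits upward."""
    greens = green_list(prev_token, len(logits))
    biased = [l + DELTA if i in greens else l for i, l in enumerate(logits)]
    weights = [math.exp(l) for l in biased]
    return random.choices(range(len(logits)), weights=weights)[0]

def detect(tokens: list[int], vocab_size: int) -> float:
    """Z-score of how far the green-token count exceeds chance (0.0 = chance)."""
    n = len(tokens) - 1
    hits = sum(tok in green_list(prev, vocab_size)
               for prev, tok in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

if __name__ == "__main__":
    vocab = 1000
    flat_logits = [0.0] * vocab            # stand-in for a real model's logits
    text = [0]
    for _ in range(200):
        text.append(sample_watermarked(flat_logits, text[-1]))
    human = [random.randrange(vocab) for _ in range(200)]
    print(f"watermarked z-score: {detect(text, vocab):+.1f}")   # large, ~ +13
    print(f"human-like z-score:  {detect(human, vocab):+.1f}")  # near 0
```

A real deployment would operate on the model’s actual logits and tokenizer, and would presumably use a keyed hash so that only the provider could run the detector.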
The tool would meet a growing demand, especially in educational settings where teachers struggle to tell authentic assignments from those produced by AI. It could also be useful in other fields, such as scientific research or publishing, where AI plagiarism is a problem. Yet despite its potential, the tool remains unreleased.
Between ethics and business: OpenAI’s reservations 😵💫
Several reasons explain why OpenAI hasn’t launched this tool yet. First, the company fears that deploying such a mechanism would discourage a significant share of its users: an internal survey found that nearly 30% of ChatGPT users might abandon the service if a detection mechanism were put in place. For a company looking to grow its base of paying subscribers, that prospect is worrying.
There are also concerns about the tool’s impact on users whose native language isn’t English. For many of them, ChatGPT is a valuable aid for improving their English writing, and an anti-cheating mechanism could stigmatize their use of AI and penalize them disproportionately.
Another challenge is that the watermark itself could be circumvented fairly easily: translating the text with Google Translate, adding and then removing emojis, or paraphrasing it with another generative model could all render the marking undetectable. This vulnerability creates a strategic dilemma for OpenAI, since making the detector widely accessible would let users probe the marking and learn to bypass it.
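To see why rewriting defeats a statistical watermark, note that the signal lives entirely in which exact tokens were chosen; every rewritten token reverts to chance. The toy simulation below uses illustrative numbers only (the `p_marked` per-token detection rate is an assumption, not OpenAI’s figure) and shows the detector’s z-score sliding toward zero as more of the text is paraphrased.

```python
import math
import random

def z_score(hits: int, n: int, gamma: float = 0.5) -> float:
    """How many standard deviations the green-token count sits above chance."""
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

rng = random.Random(0)
n = 300          # tokens in the document
p_marked = 0.9   # assumed chance a watermarked token lands on the green list

for rewritten in (0.0, 0.25, 0.5, 1.0):  # fraction of tokens paraphrased away
    p = (1 - rewritten) * p_marked + rewritten * 0.5  # rewritten tokens revert to chance
    hits = sum(rng.random() < p for _ in range(n))
    print(f"{rewritten:>4.0%} rewritten -> z = {z_score(hits, n):+.1f}")
```

The statistical evidence weakens steadily with partial rewriting and vanishes entirely once the wording is fully replaced.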
Faced with these challenges, OpenAI is exploring other solutions, notably cryptographically signed metadata, which could offer a more robust approach without the risk of false positives.
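The report doesn’t spell out how such metadata would work, but the general idea behind signed provenance (as in the C2PA standard for media) is that the provider signs each output with a private key, and anyone holding the public key can verify that a given text really came from the model and wasn’t altered. Below is a minimal sketch using Ed25519 signatures from the `cryptography` package; the `sign_output`/`verify_output` helpers and the payload format are hypothetical, not an OpenAI API.

```python
# pip install cryptography  (third-party package, not in the standard library)
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The provider holds the private key; verifiers only need the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_output(text: str, model: str) -> dict:
    """Attach signed provenance metadata to a generated text."""
    payload = json.dumps({"text": text, "model": model}, sort_keys=True).encode()
    return {"text": text, "model": model,
            "signature": private_key.sign(payload).hex()}

def verify_output(record: dict) -> bool:
    """Check that the text and its claimed origin haven't been tampered with."""
    payload = json.dumps({"text": record["text"], "model": record["model"]},
                         sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

record = sign_output("Essay draft...", "chatgpt")
print(verify_output(record))   # True: untouched output verifies
record["text"] += " (edited)"
print(verify_output(record))   # False: any edit invalidates the signature
```

A signature verifies a text exactly as delivered, so there are no statistical false positives; the flip side is that any edit, however small, breaks verification.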
An Unresolved Debate 🤔
This situation highlights the tension between ethical considerations and commercial interests within OpenAI. Similar debates notably fueled the temporary dismissal of Sam Altman the previous year.
The launch of such a tool could have a considerable impact on the use of ChatGPT and more broadly on the perception of generative AI. On one hand, it could reassure educational institutions and employers about the authenticity of submitted work. On the other hand, it could slow down the adoption of these technologies due to fear of stigmatization or detection.
While the debate continues internally, some teachers have already implemented their own methods to detect AI use, illustrating the urgency of finding a balanced solution to this growing problem. OpenAI’s decision could thus influence the entire generative AI sector and how these tools will be integrated into our society in the future.
Have you ever used ChatGPT? What do you think of this anti-cheating tool? Come discuss it in the comments.