ChatGPT’s parent company, OpenAI, has banned numerous accounts linked to Iran for spreading fake news, especially regarding the U.S. presidential election. This comes against the backdrop of Trump’s campaign accusing Iran of hacking into its computer systems.
Recently, OpenAI stated that it has identified and blocked several accounts belonging to the Iranian influence operation known as “Storm-2035.” These accounts were using ChatGPT to produce and disseminate fake information about the U.S. election, the Gaza conflict, and Israel’s participation in the Olympic Games.
The company found that these accounts were authoring long-form articles as well as short social media posts; however, most of the content generated minimal engagement. OpenAI assessed the operation as a low-level threat on the Breakout Scale, a framework published by the Brookings Institution for measuring the impact of covert influence operations.
OpenAI’s research revealed that the operation targeted voters on both sides of the political spectrum. Some accounts even copied comments from real users in an attempt to make their activity look more genuine. Still, the misinformation campaign’s effectiveness appears to have been limited.
This crackdown reflects OpenAI’s focus on AI safety and its adherence to ethical practices. The firm is stepping up its efforts to detect and mitigate foreign influence in the political process, using AI-based tools to identify threats. The timing of the announcement is notable: it came just a week after Trump’s campaign claimed it had been attacked by Iranian hackers.
OpenAI’s decisive action against the Iran-linked accounts underscores its commitment to safeguarding information integrity. By blocking these accounts, the company reinforces the need for vigilance against foreign interference in critical political processes.