Artificial intelligence firm OpenAI recently disclosed that it had disrupted several global campaigns exploiting its technology to manipulate public opinion. On May 30, OpenAI, founded by Sam Altman, announced the termination of accounts involved in covert influence operations.
In a statement, OpenAI revealed that “In the last three months, we have disrupted five covert IO [influence operations] that sought to use our models in support of deceptive activity across the internet.”
These bad actors employed AI to generate comments for articles, create social media profiles, and translate and proofread texts.
One such operation, dubbed “Spamouflage,” used OpenAI’s tools to research social media and produce multilingual content across platforms like X, Medium, and Blogspot, aiming to sway public opinion and influence political outcomes. The group also leveraged AI for debugging code and managing databases and websites.
Another campaign, known as “Bad Grammar,” targeted Ukraine, Moldova, the Baltic States, and the United States. This operation used OpenAI models to run Telegram bots and generate short political comments.
A third group, “Doppelganger,” utilized AI to generate comments in various languages, including English, French, German, Italian, and Polish. These comments were posted on platforms like X and 9GAG with the intent to manipulate public opinion.
A persistent Iranian threat actor known as the “International Union of Virtual Media” (IUVM) has been posting web content supporting Iran while criticizing Israel and the US.
Additionally, a commercial company in Israel called STOIC has been generating content about the Gaza conflict, the Histadrut trade union organization in Israel, and the Indian elections. This operation, which OpenAI nicknamed “Zero Zeno” after the founder of the Stoic school of philosophy, has attracted minimal engagement with its campaigns.
OpenAI stated, “The content posted by these various operations focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments.”
Ben Nimmo, a principal investigator for OpenAI who wrote the report, told The New York Times, “Our case studies provide examples from some of the most widely reported and longest-running influence campaigns that are currently active.”
Despite these revelations, OpenAI noted that these operations did not seem to achieve significantly increased audience engagement or reach through their services.