Meta Platforms Inc. has taken down hundreds of Facebook accounts linked to covert influence campaigns from China, Israel, Iran, Russia, and other nations. Some of these campaigns employed artificial intelligence tools to create disinformation, as detailed in the company’s quarterly threat report.
Threat actors have been using artificial intelligence (AI) to generate fake text, images, and videos on platforms owned by Meta, the parent company of Facebook, Instagram, and WhatsApp, in an attempt to mislead users with content that appears legitimate.
However, Meta said in the report, published on Wednesday, that the use of generative AI had not undermined the company’s ability to disrupt those networks.
Meta discovered disinformation campaigns, including a Chinese network sharing AI-generated images of a fake pro-Sikh movement and an Israeli network posting AI-generated comments praising Israel’s military. These networks were removed before gaining significant traction.
At a press event on Tuesday, David Agranovich, Meta’s policy director for threat disruption, stated, “Right now we’re not seeing gen AI being used in terribly sophisticated ways.” He said that, so far, attempts to use AI to create profile photos or churn out large volumes of spam have not been especially effective.
Agranovich added, “But we know that these networks are inherently adversarial. They’re going to keep evolving their tactics as their technology changes.”
Nick Clegg, Meta’s president of global affairs, has made a strong case for the need to identify and label AI-generated content, particularly as the company prepares for the 2024 election season. More than 30 countries will hold elections this year, including several where the company’s apps are widely used, such as the US, India, and Brazil.
Clegg underscored the urgent need for an industry standard on watermarking. Meta is building tools to detect and label images generated by AI systems from companies such as Google and OpenAI.
The company has begun tagging certain images with both visible and invisible markers. Under Meta’s revised policies, deceptive AI-generated content is now labeled rather than removed. Facebook and Instagram require advertisers to disclose the use of AI in ads about social issues, elections, or politics, but the company does not fact-check ads from politicians.