On Friday, February 16, twenty technology firms pledged to prevent their artificial intelligence (AI) systems from being used to improperly influence elections, including those in the United States.
The agreement addresses concerns regarding deceptive AI-generated election content, which could potentially mislead the public and undermine the integrity of electoral processes.
While the agreement is voluntary and does not impose a total prohibition on AI content during elections, it outlines eight actions that participating companies pledge to implement this year.
The 20 signatories are Microsoft, Google, Adobe, Amazon, Anthropic, Arm, ElevenLabs, IBM, Inflection AI, LinkedIn, McAfee, Meta, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.
All participating companies have agreed to eight specific commitments, including developing detection tools, assessing the potential risks of their own AI models, addressing identified deceptive content, and fostering cross-industry cooperation.
Importantly, they pledge transparency and continued engagement with experts and the public to build awareness and media literacy, aiming to strengthen overall societal resilience against such manipulation tactics.
Furthermore, the agreement acknowledges that global lawmakers have been slow to respond to rapid advancements in generative AI, prompting the tech industry to take steps toward self-regulation.
AI can influence elections through tactics like targeted misinformation, deepfake videos, biased algorithms in social media, and voter suppression strategies.
By analyzing extensive data, AI can pinpoint susceptible groups and customize persuasive content to shape opinions or dissuade certain demographics from participating in the voting process.