Artificial Intelligence (AI) has undoubtedly changed the way we look at our world. Today, AI machines are performing surgeries, predicting the weather, running banks, assisting farmers, and writing bad poetry.
According to recent reports, global IT spending is set to hit $5 trillion by the end of 2024, an increase of roughly 8% over the previous year. Much of this growth is driven by investments in AI from companies and investors across the globe.
With the advent of AI, a new technology, deepfake AI, has also emerged, and it has changed the way we form our sense of reality. The latest buzz in town, deepfake, has already been demonized by celebrities, governments, and seemingly everyone born before 2000.
Deepfake has faced allegations of spreading misinformation on social media, interfering with electoral processes in democracies, defaming celebrities with morphed videos and images, and in general, creating an environment of mistrust and skepticism on the internet.
In this article, we break down the complex world of deepfake, the perils it holds for humankind, and the steps taken by companies and governments against its misuse.
What exactly is Deepfake AI?
Deepfake AI, a fusion of “deep learning” and “fake,” uses advanced artificial intelligence techniques to create remarkably realistic fake or altered digital content, usually videos, images, or audio recordings. Under the hood, it relies on deep learning models, most commonly generative adversarial networks (GANs) and autoencoders, to analyze and then manipulate faces, voices, and scenes.
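To make that concrete: many classic face-swap deepfakes pair a single shared encoder with one decoder per identity. The encoder learns a person-agnostic representation of expression, pose, and lighting, while each decoder learns to render one specific face. The PyTorch sketch below illustrates that structure in miniature; the layer sizes, variable names, and 64x64 crop size are illustrative assumptions, not the configuration of any particular deepfake tool.

```python
# Minimal sketch of the shared-encoder / dual-decoder idea behind classic
# face-swap deepfakes. Sizes and names are illustrative, not from any real tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into an identity-agnostic latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE specific identity from the shared latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)

# One shared encoder, one decoder per identity.
encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct person A's faces
decoder_b = Decoder()  # trained to reconstruct person B's faces

# After training, the "swap" is simply encoding a frame of person A
# and decoding it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real video frame
swapped = decoder_b(encoder(frame_of_a))  # B's face with A's expression and pose
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```

Trained on enough footage of both people, a setup like this can transplant one person's face onto another's performance, frame by frame, which is exactly what makes the results so convincing.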
Deepfake: Some good, many bad aspects
Deepfake videos have reached a level of sophistication where they can convincingly depict individuals saying or doing things they never did. Whether it’s altering facial expressions, voices, or entire scenarios, these videos pose a significant challenge to the authenticity of digital content.
This technology can be mind-blowing in some respects. Say a new ‘Batman’ movie is in the works and it features the ‘Joker,’ a character famously played by the late Heath Ledger. Deepfake face-swapping could map Ledger’s face onto a stand-in actor and recreate the performance with realistic animation.
How cool would that be!
However, in a world where reality can be manipulated at the click of a button, the rise of AI-powered deepfake technology has sparked concerns about defamation, manipulation, and copyright infringement against artists. Tomorrow, a piece of software could pen new songs and perform them in Taylor Swift’s voice without the artist receiving a single penny in royalties.
Celebrity Concerns and Political Ramifications
Celebrities such as Warren Buffett, Taylor Swift, and Amitabh Bachchan, as well as Indian Prime Minister Narendra Modi, have publicly expressed their concerns about deepfake technology.
Related: FKA Twigs Reveals her Deepfake Creation at Senate Hearing
Recently, speaking at a film festival at the Symbiosis Institute in India, legendary actor Amitabh Bachchan emphasized the dangers of deepfake videos and urged people to be cautious about what they believe online, saying, “One of the things that is of great concern is AI and a lot of people are objecting to the fact that all of us are now being subjected to face mapping. Our entire body is going to be face-mapped and will be kept aside and can be used at any point in time.”
He also added, “There will be a time when Symbiosis Institute will call my AI and not me personally.”
Similarly, Indian PM Narendra Modi claimed that attempts were being made to spread misinformation through deepfakes during India’s ongoing 2024 general elections. Modi labeled this India’s first AI election, saying that fake voices were being used to attribute to leaders “statements that we have never even thought of,” and calling it a conspiracy “to create tension in society.”
Recently, deepfake videos depicting Taylor Swift supporting Trump and spreading election denialism have been circulating widely on social media, particularly on X (formerly Twitter), racking up millions of views. In another development, Warren Buffett compared AI to nuclear weapons, calling it a “genie partially out of the bottle.”
Buffett’s concerns echo those of JPMorgan Chase CEO Jamie Dimon, who, along with Michael Saylor, sees AI’s transformative power but also its risks, such as cyberattacks.
Global Responses and Preparations
X (formerly Twitter)
Elon Musk, CEO of X, Tesla, and SpaceX, recently announced an initiative to combat deepfakes as well as “shallowfakes,” media manipulated without the use of AI.
Additionally, X implemented a feature that automatically displays notes on matching images, so a note attached to one manipulated image also appears on other posts carrying the same picture. This enhances transparency and helps users discern manipulated content across the platform.
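X has not published how it decides that two images “match,” so the snippet below is only a plausible sketch of the general idea: perceptual hashing, which gives near-duplicate images similar fingerprints even after resizing or re-encoding. The file names and distance threshold here are hypothetical.

```python
# Minimal sketch of near-duplicate image matching via perceptual hashing.
# X has not disclosed its actual matching method; this only illustrates the
# general technique. File names and the threshold are hypothetical.
import imagehash          # pip install ImageHash
from PIL import Image     # pip install Pillow

def perceptual_hash(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that stays stable under resizing/re-encoding."""
    return imagehash.phash(Image.open(path))

def is_same_image(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Treat two images as 'matching' if their hashes are within a small Hamming distance."""
    return (perceptual_hash(path_a) - perceptual_hash(path_b)) <= max_distance

# A note attached to one copy of a manipulated image could then be propagated
# to every post whose image hash falls within the distance threshold.
if is_same_image("noted_image.jpg", "reposted_image.jpg"):
    print("Match found: show the existing note on this post too.")
```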
Meta
Meta, the parent company of Facebook and Instagram, announced plans to label AI-generated content in response to criticism from its Oversight Board. Starting in May 2024, such content will receive “Made with AI” labels for transparency.
Labels will also be applied to high-risk content, with a full rollout by July. Meta will stop removing manipulated media on those grounds alone, though hate speech and election-interference content will still be taken down. The company aims to balance freedom of expression with platform integrity.
Governments and tech platforms around the world are also stepping up efforts to combat the spread of deepfake videos and the misinformation they propagate.
United States
In the United States, government agencies such as the Department of Defense and the Department of Homeland Security are investing in research to develop tools for detecting and countering deepfakes.
The US Federal Trade Commission (FTC) has proposed rules aimed at preventing the use of deepfakes in fraud, warning that advanced technology is allowing fraudsters to impersonate people more convincingly.
Also Read: Singapore PM Warns Against Deepfake Scams in Crypto Videos
European Union (EU)
The EU’s Digital Services Act (DSA) aims to hold online platforms accountable for the content they host, including deepfakes, and requires them to put measures in place to detect and remove such content.
The EU, in collaboration with regulators across its 27 member states, announced plans to establish an “enforcement ecosystem” to communicate to all platforms that fake material will be deemed illegal under the Digital Services Act.
EU-funded projects like “SHERPA” and “WeVerify” focus on understanding and combating deepfakes. These projects involve interdisciplinary teams working on detection, mitigation, and policy aspects.
China
China has also implemented various measures to regulate AI and combat deepfake technology. The Cyberspace Administration of China (CAC) has issued regulations requiring AI-generated content to be clearly labeled, aiming to curb misinformation and deepfakes.
South Korea
South Korea’s National Police Agency (KNPA) has developed a tool to detect AI-generated content amidst a surge in politically motivated deepfakes. While the world anticipates deepfakes in election seasons, South Korea has already faced this issue.
How Deep Are We in Deepfake?
The future of AI in social media is expansive, but concerns over deepfakes persist. While AI enhances content creation and personalization, deepfakes pose significant risks by allowing the manipulation of images and videos.
The pros of deepfake technology include advanced content generation and improved user experiences, while the cons involve the spread of misinformation and the erosion of trust. AI already permeates social media through recommendation algorithms, content moderation, and chatbots.
However, to mitigate deepfake risks, robust detection tools and policies are crucial. Ultimately, balancing AI’s benefits with ethical considerations is key to harnessing its full potential in social media.
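What might such a detection tool look like? A common baseline, though by no means the only approach, is a frame-level classifier fine-tuned to separate real faces from synthetic ones. The sketch below uses PyTorch and torchvision with a placeholder dataset path; real detectors add temporal modeling, artifact-specific cues, and far more data.

```python
# Minimal sketch of a frame-level "real vs. fake" classifier, a common baseline
# for deepfake detection. The dataset path and hyperparameters are placeholders;
# production systems go well beyond this.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Pretrained backbone with a 2-way head: one class for real faces, one for fake.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a folder layout like faces/real/*.jpg and faces/fake/*.jpg (placeholder path).
dataset = datasets.ImageFolder("faces", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One pass over the data; in practice you would train for several epochs
# and hold out a validation set to measure real-vs-fake accuracy.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```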
The future undoubtedly belongs to AI, but it remains to be seen how effectively we as humans can balance its benefits while safeguarding ourselves against the potential harms of deepfakes.