After a long day of gunfighting, a lone, exhausted Russian soldier is making his way out of a trench. The trench is filled with the corpses of his comrades who died fighting in the Russia-Ukraine war. As he stumbles over the bodies, a Ukrainian drone hovers above him with a familiar, dreadful buzzing noise.
Like a predator toying with its prey, the AI-powered drone first chases the soldier, then drops grenades nearby, inflicting flesh wounds. The injured soldier finally gives up and folds his hands in front of the drone, begging for mercy. The drone hovers for a few seconds, as if making a decision, and then delivers its final payload onto the soldier.
Welcome to 21st-century warfare, where an amalgamation of machine learning, artificial intelligence, and high-tech equipment is wreaking havoc from Ukrainian villages in eastern Europe to the Gaza Strip in the Middle East.
As AI adoption grows worldwide, the technology is no longer limited to our cellphones and air conditioners. In recent years, AI has been recruited by militaries, with the Russia-Ukraine and Israel-Hamas wars as its prime examples. As casualties run into the thousands, human rights experts and activists have called out the increasing use of AI in illegal surveillance, deepfake videos, and advanced military operations.
To discuss the adverse impact of AI use in the military, a two-day summit on the "Responsible Use of Artificial Intelligence in the Military" (REAIM) is taking place in Seoul, South Korea, on 9-10 September. The summit will bring together representatives from over 90 countries, including the U.S. and China, along with arms industry leaders, to brainstorm ways to regulate the use of AI in the military.
What is the REAIM Summit in Seoul About?
The main focus of this two-day REAIM summit is to set global norms for the use of artificial intelligence in warfare. It also aims to produce a blueprint outlining guidelines for the responsible use and governance of military AI.
The Seoul summit is co-hosted by Singapore, the Netherlands, Kenya, and the United Kingdom. More than 90 countries, including the U.S. and China, have sent official representatives, and over 2,000 people have registered to take part, underscoring the significance of the event.
This is the second edition of the summit; the first took place in The Hague, Netherlands, in February 2023. While the first summit focused entirely on discussing the use of AI in the military, the Seoul summit concentrates on solutions and concrete measures.
The main goals of the summit are to establish new norms for the use of AI in warfare, develop ideas for the global governance of military AI, and build a deeper understanding of its positive and negative implications for global peace and security.
Benefits and Dangers of AI in the Military
AI plays a growing role in military operations, as the technology has immense potential in activities such as inventory management, surveillance, reconnaissance, intelligence, and battlefield and logistics planning.
Militaries across the globe have begun to recognize the power of AI and use it on the battlefield for crucial purposes such as collecting and analyzing battlefield data. This data then plays an integral part in their operations, helping with decision-making, situational awareness, strategy, and reducing civilian casualties.
Like every coin, artificial intelligence has two sides. Alongside its many benefits, AI carries potential risks and dangers; its misuse in the military could have severe negative consequences and harm human life.
Major concerns include the use of AI in lethal autonomous weapons, illegal surveillance, intimidation, war robots, hacking, and cyberattacks, among others. Beyond these, AI raises global security concerns, as it could fuel an arms race and heighten international tensions.
From Ukraine to Gaza: Real-Life Terminator Scenarios in Military AI
AI is turning real life into a sci-fi Hollywood movie, and Gaza and Ukraine are the most recent examples. Both have become the first witnesses of AI-driven warfare.
According to media investigations, Israel's military has been using AI in its operations in the Gaza Strip. AI-powered drones have been used for surveillance and the detection of militants in the densely populated territory.
Israel’s Autonomous Weapons and AI Targeting Systems
Israel has used two AI targeting systems, known as "Lavender" and "The Gospel," to select and attack targets in Gaza. These automated systems have contributed to mass destruction and loss of life, as witnessed over the past year of the war in Gaza.
Additionally, Israel has used several AI tools in the war, including lethal autonomous weapons (LAWS). These automated weapons are equipped with guns and missiles and incorporate facial recognition, biometric surveillance, and automated targeting technologies.
According to several activists, however, the use of such AI-powered weapons is a clear violation of human rights.
Ukraine’s Vampire Drones
Similar to Israel, Ukraine's army was seen testing a "Vampire" drone in Donetsk in May 2024. A Reuters report revealed that Ukraine is developing AI-powered drones that carry weapons and require minimal human supervision. These drones are reportedly already in use against the Russian army, as several videos circulating online show.
Conclusion
As artificial intelligence becomes an integral part of military operations, the chances of its misuse are also increasing. Gaza and Ukraine are prime examples of the abuse and misuse of AI, which shows why it is important to establish regulations and norms for the use of AI in warfare.