Autonomous AI weapons in war – inhumane and contrary to international law

An MQ-1 Predator, armed with AGM-114 Hellfire missiles, piloted by Lt. Col. Scott Miller on a combat mission over southern Afghanistan. Photo by Lt. Col. Leslie Pratt – afrc.af.mil, Public Domain, https://commons.wikimedia.org/w/index.php?curid=68261178

by Anno Hellenbroich

Recently, the FAZ reported in a concise article that the assassination of the Iranian nuclear scientist Mohsen Fakhrizadeh in November 2020 was most likely carried out by an A.I.-controlled weapon (a hail of bullets). The NYT had reported similar suspicions back in October 2021. This is highly significant at the moment, as growing political pressure is being exerted on industry and the military, in the context of the Russian-Ukrainian war, to use A.I. weapons on a large scale – including in the expansion of Ukrainian attacks on Russian territory. An article by two Munich lawyers, Professor Dr. Thomas Klindt and Dr. Lennart Laude, in the FAZ of 22 August 2024 gives an indication of how controversial this idea is. The text is based on a lecture the two legal specialists gave at the 2024 Admiral Staff Officer Course of the Bundeswehr Command and Staff College, as part of the AI Navy Strategy Conference.

The FAZ headline reads: “In the gray area of war: what international law says about autonomous weapons systems.” The article opens: “Weapon systems with autonomous functions are being used from Ukraine to the Gaza Strip. The development of autonomous weapons systems is also advancing, and artificial intelligence will play a decisive role in it. The law is not prepared for this development.” The authors examine whether lethal, fully autonomous weapons (lethal autonomous weapon systems, LAWS) will in the future be used to kill people without any human control. This has not been clarified at all under international law and is, in my opinion, incompatible with the much-vaunted Western value system.

The authors pose the question: is it legally tolerable to place the decision over the life and death of a human being solely in the hands of a machine? They refer to the EU’s AI Act (AI-Verordnung), to date the only regulation of artificial intelligence in the world, which deliberately excludes military AI systems – an exclusion the authors stress was highly controversial in terms of both legal and industrial policy. Among other things, they discuss the current situation in the Gaza Strip, where facial-recognition software is used and civilian casualties are tolerated because these autonomously acting systems cannot distinguish civilians from combatants. This applies in particular to systems that operate entirely without human influence (“human-out-of-the-loop”). The authors conclude that the offensive use of human-out-of-the-loop systems as a means of attack is incompatible with international humanitarian law. They also point to a further prohibition under international law and general human rights law: the killing of people by an algorithm-controlled machine without any human involvement is completely contrary to our value system of human dignity.

The risk becomes especially clear when it comes to integrating AI into nuclear weapons, the lawyers emphasized during the Admiral Staff Officer Course. This is all the more significant because, in view of the global test of strength between the major powers USA, Russia, and China, various international publications are increasingly discussing the growing risk of the use of nuclear weapons – for example, the Boston Globe of July 26 (“The unacceptable, growing risk of nuclear war”), which draws on the recently published book by investigative journalist Annie Jacobsen, “Nuclear War: A Scenario”, or a NYT article of August 24, “Biden’s secret nuclear strategy” by David E. Sanger. The lawyers do qualify the current global danger posed by AI-controlled nuclear weapons by pointing to the commitments made by the United States in 2022 that decisions on the use of nuclear weapons will always be made with a “human-in-the-loop” (although the nuclear doctrine recently reformulated by the Biden administration, reported on in the NYT, is not yet publicly known for reasons of secrecy). The authors emphasize that the UK adopted a similar nuclear policy in 2022, but note that “Russia and China … have not done so”.

For me, however, the question remains: if there is no communication at all among the major powers – USA, China, Russia – no listening to one another about a new global security architecture and their respective interests, how can a nuclear war be avoided? The authors emphasize that “for the Bundeswehr, the offensive deployment of completely autonomous weapons systems … would very probably be ruled out under international law. … (But what) if other states develop such weapons systems … and the Bundeswehr is thus confronted with a military disadvantage?”

A nightmare.
