
AI drone may have 'hunted down' and killed soldiers in Libya, UN experts say

Top humanitarians, including the president of the ICRC and UN Secretary-General Antonio Guterres, are advocating for such 'killer robots' to be banned
Fighters loyal to Libyan General Khalifa Haftar patrol the streets in the eastern city of Benghazi during a state of emergency to combat Covid-19, on 21 March (AFP/File photo)

An armed drone that attacked forces loyal to Libya's eastern commander Khalifa Haftar last year may have been controlled by artificial intelligence, not a human operator, according to a recent report commissioned by the United Nations.

The military-grade autonomous drone, known as a lethal autonomous weapons system, or LAWS, fired at Haftar's forces as they were fleeing a battle, according to the 548-page report, which was published in March.

"Once in retreat, they were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems," it added. 

According to the report, which was written by a panel of independent experts, the drone was powered by AI and was used in March 2020 by forces loyal to the internationally recognised government.

The device, a Kargu-2 quadcopter, was produced by the Turkish military tech company STM.


'Fire, forget and find'

The drone is reported to have "hunted down" Haftar's fleeing fighters, who were "remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems", the UN report said.

Haftar, who controls much of the country's east, including lucrative oil fields, had launched an operation in 2019 to seize control of the capital, but it ultimately failed, leading to a stalemate with the internationally recognised government.

According to the report, the AI systems "were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability".

'Current machine learning-based systems cannot effectively distinguish a farmer from a soldier'

- Zachary Kallenborn, Bulletin of the Atomic Scientists

"Fire, forget and find" is a phrase in reference to a weapon's ability to find its target and attack on its own after being fired.

The UN's report did not explicitly say that the combat drone killed anyone, but if it did, it would represent a significant first in warfare, Zachary Kallenborn, a research affiliate who studies the use of military drones, wrote recently in the Bulletin of the Atomic Scientists.

"If anyone was killed in an autonomous attack, it would likely represent an historic first known case of artificial-intelligence-based autonomous weapons being used to kill," Kallenborn wrote.

Warning of the dangers of such systems, Kallenborn stressed that some leading scientists and technology developers - from the late Stephen Hawking to Elon Musk - have said they want these sorts of weapons banned.

"Current machine learning-based systems cannot effectively distinguish a farmer from a solider," Kallenborn wrote. "Farmers might hold a rifle to defend their land, while soldiers might use a rake to knock over a gun turret. But even adequate classification of a vehicle is difficult too, because various factors may inhibit an accurate decision."

Not new

While human-operated drones have been used in military strikes for over a decade, the decision to target and strike has until now been made deliberately by human operators. Loitering munitions, however, which are simpler autonomous weapons designed to hover on their own over an area before crashing into a target, are not new.

After reading the UN's report, Ulrike Franke, a senior policy fellow at the European Council on Foreign Relations, questioned whether the AI drone used in the Libya attack was really all that more sophisticated than other known cases in which loitering munitions have been used. 

'We must decide what role we want human beings to play in life-and-death decisions during armed conflicts'

- Peter Maurer, ICRC President

"I must admit, I am still unclear on why this is the news that has gotten so much traction," Franke wrote on Twitter, noting that loitering munitions have been used in combat for "a while", and questioning whether the autonomous weapon used in Libya actually caused any casualties.

According to Franke, the LAWS that the UN experts described did not seem more advanced than other loitering munitions being used in other conflicts. 

"It seems to me that what's new here isn't the event, but that the UN report calls them lethal autonomous weapon systems," Franke said. 

Middle East Eye reached out to Lipika Majumdar Roy Choudhury, the coordinator of the UN report, for clarification on the differences documented between the LAWS used in Libya last year and other loitering munitions systems, but did not receive a response by the time of this article's publication.

'Morally repugnant, politically unacceptable'

Meanwhile, a ceasefire in Libya that was brokered in October continues to hold, an encouraging sign that the conflict may be coming to an end. 

Still, according to the UN report, it was Turkey that supplied the lethal autonomous drone to Tripoli, raising fears of the proliferation of such weapons in other conflicts.

Last month, the International Committee of the Red Cross (ICRC) called on world governments to ban such fully autonomous weapons.


"Armed forces, in search of ever-increasing speed of attack and deploying increasing numbers of armed robots, are looking to autonomy for military advantage. At the same time, this alters the role of humans in decisions to use force. In this new era of machine learning software that writes its own rules, many fear that this is a dangerous prospect for civilian protection and for international security," ICRC President Peter Maurer said during a virtual briefing in May. 

"These developments prompt us to ask not just what these technologies can be used for, but what they should be used for," Maurer continued. 

"They prompt us to make responsible choices about the future of warfare. Ultimately, we must decide what role we – as a society – want human beings to play in life-and-death decisions during armed conflicts."

In 2018, United Nations Secretary-General Antonio Guterres called on states to prohibit AI weapons systems that could target and attack people on their own, calling them "morally repugnant and politically unacceptable". 

According to Human Rights Watch, dozens of countries have expressed support for moving towards a ban or at least some sort of restrictions on autonomous weapons, "but major military powers – most notably Russia and the United States – have repeatedly thwarted moves to begin negotiations, arguing it is 'premature' to attempt regulation".
