How Israel is testing AI in war against Palestinians
The Israeli army launched a new strategy last year to integrate AI weapons and technology across all military branches - the most sweeping strategic transformation in decades. Last month, the Israeli defence ministry boasted that the army intends to become an AI "superpower" in the field of autonomous warfare.
"There are those who see AI as the next revolution in changing the face of warfare in the battlefield," retired army general Eyal Zamir told the Herzliya Conference, an annual security forum. Military applications could include "the ability of platforms to strike in swarms, or of combat systems to operate independently... and of assistance in fast decision-making, on a scale greater than we have ever seen".
Israel's defence industry is producing a vast array of autonomous military vessels and vehicles, including an "armed robotic vehicle" described as a "robust" and "lethal" platform featuring "automatic target recognition". An autonomous submarine for "covert intelligence gathering", dubbed the BlueWhale, has been in test trials.
If all this frightens the hell out of you, it should. Israel is creating not just one Frankenstein monster, but entire swarms of them capable of wreaking havoc, not only on their Palestinian targets but on anyone anywhere in the world.
Palestinians are the testing ground for such technologies, serving as a "proof of concept" for global buyers. Israel's most likely customers are countries embroiled in war. Though the weapons may offer a battlefield advantage, they will surely increase the overall level of suffering and bloodshed on all sides, enabling killing in greater numbers and with greater efficiency. For that reason, they are monstrous.
Another new Israeli AI technology, Knowledge Well, not only monitors where Palestinian militants are firing rockets but can also be used to predict future attack locations.
While such systems may offer protection for Israelis from Palestinian weapons, they also enable an undeterred Israel to become a virtual killing machine, unleashing terrifying assaults against military and civilian targets while facing minimal resistance from its enemies.
Seek and destroy
Such technologies offer a warning to the world about how pervasive and intrusive AI has become. Nor is it reassuring when the Israeli army's chief AI expert says he competes with private-sector salaries for AI specialists by offering "meaningfulness". As if this would somehow reassure, he adds that Israel's AI weapons will for "the foreseeable future... always [have] a person in the loop".
I leave you to ponder how killing Palestinians could be "meaningful". Nor is it likely that a human being will always control this battlefield weaponry. The future involves robots that can think, judge and fight autonomously, with little or no human intervention beyond initial programming. They have been described as the "third revolution in warfare after gunpowder and nuclear arms".
While they may be programmed to seek out and destroy the enemy, who determines who the enemy is and makes life-and-death decisions on the battlefield? We already know that in war, humans make mistakes - sometimes terrible ones. Military programmers, despite their expertise in shaping what the armed robots will think and do, are no less prone to error. Their creations will feature huge behavioural unknowns, which could cost countless lives.
Palestine is one of the most surveilled places on earth. CCTV cameras are ever-present in the Palestinian landscape, which is overlooked by Israeli guard towers, some armed with remote-controlled robotic guns. Drones fly overhead, capable of dropping tear gas, firing directly on Palestinians below, or directing fire by personnel on the ground. In Gaza, the constant surveillance instils trauma and fear in residents.
In addition, Israel now has facial recognition apps, such as Blue Wolf, that aim to capture images of every Palestinian. These images are fed into a huge database that can be mined for any purpose. Software from companies such as AnyVision, capable of identifying huge numbers of individuals, is integrated with systems containing personal information, including social media posts.
It is a web of control that instils fear, paranoia and a feeling of hopelessness. As former Israeli army chief of staff Rafael Eitan once said, the goal is to make Palestinians "run around like drugged cockroaches in a bottle".
Frankenstein's monster
Many data researchers and privacy advocates have warned about the dangers of AI, both in the public sphere and on the battlefield. AI-powered military robots are but one of many examples, and Israel is at the forefront of such developments. It is Dr Frankenstein, and this technology is his monster.
Human Rights Watch has called for a ban on such military technology, warning: "Machines cannot understand the value of human life."
Israeli AI technology may be, at least in the eyes of its creators, intended for the protection and defence of Israelis. But the damage it inflicts fuels a vicious cycle of endless violence. The Israeli army, and the media outlets that promote such wizardry, only create more victims - initially Palestinians but, in time, all those targeted by the dictatorships and genocidal states that buy these weapons.
Another AI "achievement" was the Mossad assassination of the father of Iran's nuclear programme, Mohsen Fakhrizadeh, in 2020. The New York Times offered this breathless account: "Iranian agents working for the Mossad had parked a blue Nissan Zamyad pickup truck on the side of the road... In the truck bed was a 7.62-mm sniper machine gun... The assassin, a skilled sniper, took up his position, calibrated the gun sights, cocked the weapon and lightly touched the trigger.
"He was nowhere near Absard [in Iran], however. He was peering into a computer screen at an undisclosed location more than 1,000 miles away...[This operation was] the debut test of a high-tech, computerized sharpshooter kitted out with artificial intelligence and multiple-camera eyes, operated via satellite and capable of firing 600 rounds a minute.
"The souped-up, remote-controlled machine gun now joins the combat drone in the arsenal of high-tech weapons for remote targeted killing. But unlike a drone, the robotic machine gun draws no attention in the sky, where a drone could be shot down, and can be situated anywhere, qualities likely to reshape the worlds of security and espionage."
We know the dangers posed by autonomous weaponry. An Afghan family was brutally killed in a US drone strike in 2021 because one of its members had been misidentified as a wanted terrorist. We know that the Israeli army has repeatedly killed Palestinian civilians in what it has labelled battlefield "mistakes". If human beings fighting on a battlefield can err so egregiously, how do we expect AI-operated weaponry and robots to do a better job?
This should raise an alarm about the devastating impacts that AI will surely have in the military realm, and about Israel's leading role in developing such unregulated lethal weapons.
The views expressed in this article belong to the author and do not necessarily reflect the editorial policy of Middle East Eye.