War on Gaza: European AI Act must be expanded to protect Palestinians

EU legislation's failure to regulate exports allows Israel to continue using Europe's artificial intelligence systems for its lethal assault on Gaza
Smoke rises after an Israeli strike in Nuseirat, Gaza, on 20 July 2024 (Eyad Baba/AFP)
This month, the European Union AI Act came into force, aimed at fostering "responsible artificial intelligence development and deployment in the EU".

Adopted earlier this year, the act was praised as a major step forward in regulating artificial intelligence. It aims to reduce the risks posed by AI systems, banning those deemed too dangerous and enforcing strict safeguards for high-risk applications.

But while this legislation seeks to protect citizens within the EU, it falls short of addressing the international implications, particularly regarding AI technologies used in Israel’s occupation of Palestinian territories and the ongoing genocide in Gaza, where AI is being deployed for surveillance, targeted strikes and population control.

Since the start of the Israeli war on Gaza, more evidence has emerged of Israel's use of automated warfare tactics.

7amleh - the Arab Centre for Social Media Advancement, an organisation defending the digital rights of Palestinians - has examined the potential influence of the EU AI Act on AI deployment in Israel and its consequences for Palestinian human rights, particularly in the areas of surveillance, law enforcement and automated warfare.

According to a position paper from 7amleh, “the Israeli government deploys AI systems to aid its occupation of the occupied Palestinian territory and control the movements of Palestinians and subject them to invasive surveillance”.

Palestinians face daily intrusions from invasive AI technologies deployed by Israel, such as facial recognition systems, smart cameras and sensors, and predictive policing algorithms, which infringe on their rights to privacy, non-discrimination, and freedom of movement.

Notorious

The current Israeli war on Gaza underscores the escalating use of AI in automated warfare, including systems such as "Gospel", "Lavender" and "Where’s Daddy?" - a trend that has exacerbated the high casualty rate among civilians.

These technologies, combined with reduced human oversight, have contributed to the massive death toll and destruction of homes. The increased reliance on AI for targeting decisions raises serious ethical concerns, as the rapid, large-scale identification of targets often leads to insufficient due diligence and errors.


Israel's military-industrial complex is notorious for developing, deploying and exporting these technologies, often testing them in occupied Palestinian territories. Moreover, Israel imports AI technologies from EU-based companies, reinforcing its occupation and systemic oppression of Palestinians.

The EU AI Act categorises AI systems by risk, prohibiting those with unacceptable risks - like those using subliminal techniques or exploiting vulnerabilities - and enforcing strict requirements for high-risk systems, such as facial recognition and predictive policing.

However, it notably exempts AI used for military, defence or national security purposes from regulation, despite the significant human rights implications.

One glaring oversight is the act’s failure to regulate the export of AI systems by EU companies to non-EU countries, including Israel. This means EU firms can sell AI technologies, banned or highly regulated within the EU, to Israel without adequate safeguards.

Consequently, facial recognition, predictive policing and AI systems used in automated warfare can be exported to Israel, potentially worsening human rights violations against Palestinians.

For example, during the current Gaza conflict, Israel's AI-driven military targeting has led to high civilian casualties, with AI systems selecting targets for automated attacks.

In the occupied West Bank, extensive use of facial recognition technology restricts Palestinian movement, violating their rights to privacy and freedom of movement.

These deployments, unchecked and unscrutinised, pose severe threats to Palestinian rights.

Broad exemptions

The act also makes troubling allowances for national security and law enforcement applications. Despite calls from civil society to ban invasive technologies like facial recognition and predictive policing, the act classifies these as high-risk instead of prohibiting them outright.

EU law enforcement agencies can use real-time remote facial recognition in public spaces under specific conditions, such as preventing imminent threats or locating victims.

However, these conditions are open to interpretation and potential abuse under the guise of security and counter-terrorism.

The act also permits retrospective facial recognition, which can identify individuals after the fact, disproportionately affecting people of colour, including Palestinians.

Furthermore, law enforcement agencies are exempt from publishing details of the AI systems they use, diminishing accountability. These broad exemptions for national security undermine the act’s intended safeguards, risking misuse of AI technologies against Palestinian and pro-Palestine activists in the EU.

To mitigate these concerns, several actions are essential for EU policymakers, civil society organisations and AI providers.

Firstly, it is crucial to expand the scope of the EU AI Act to regulate the export of AI systems to non-EU countries, ensuring that EU-made technologies do not contribute to human rights violations abroad. This includes establishing safeguards for high-risk systems and banning the export of AI technologies prohibited within the EU.

Additionally, stricter requirements must be imposed on high-risk AI systems, particularly those used in surveillance and law enforcement.

Ethical AI development

Transparency and accountability measures should be mandated, including a requirement for human rights impact assessments to be conducted and made publicly available, as well as banning invasive technologies such as predictive policing and facial recognition.

Moreover, it is essential to narrow national security and law enforcement exemptions by establishing clear guidelines and oversight mechanisms to ensure that AI systems deployed for security purposes do not infringe on fundamental rights.

Protecting Palestinian digital rights is another key priority. EU policymakers and civil society organisations should advocate for these rights by ensuring that AI technologies do not entrench occupation and oppression, including monitoring AI deployment in occupied territories and holding companies accountable for their role in human rights abuses.

Finally, promoting ethical AI development is crucial, with companies adopting guidelines that prioritise human rights and transparency.

This includes conducting thorough risk assessments, engaging with stakeholders and ensuring that technologies do not contribute to human rights violations.

The EU AI Act is a significant step towards regulating artificial intelligence and mitigating its risks. However, its current scope and provisions fall short of addressing the complex challenges posed by AI technologies in contexts such as the Israeli occupation of Palestinian territories.

By expanding the act's scope to regulate exports, strengthening safeguards for high-risk systems and closing national security loopholes, the EU can ensure that its AI regulation framework protects human rights within and beyond its borders.

Policymakers, civil society organisations and companies must work together to promote ethical AI development and safeguard the rights of vulnerable populations, including Palestinians, in the face of advancing technologies.

The views expressed in this article belong to the author and do not necessarily reflect the editorial policy of Middle East Eye.

Taysir Mathlouthi is the EU advocacy officer at 7amleh. She holds five master's degrees in human rights/international relations, media and communications, war studies, political science, and filmmaking. She previously worked for Unicef on online misinformation and disinformation analysis.