CIA used image of Hajj pilgrims to showcase surveillance and AI capabilities

Photo appeared in presentation on how cloud computing is helping spy agency 'stay one step ahead of the enemy'
The image appeared in a presentation by a senior official in the CIA’s Digital Innovation Directorate (Screengrab)
By Simon Hooper in London and Umar A Farooq in Washington

The CIA used a photo of pilgrims attending the Hajj to illustrate the potential capabilities of new surveillance and artificial intelligence technologies, Middle East Eye can reveal.

Digital rights and Muslim civil society organisations said the use of the photo highlighted grave concerns about fast-developing tools such as facial recognition software and was part of a pattern of Islamophobia within intelligence and law enforcement agencies in which Muslims were portrayed as a threat.

The image appeared in a presentation by a senior official in the CIA’s Digital Innovation Directorate on how the spy agency’s shift to cloud-based technologies was transforming its intelligence-gathering capabilities.

Speaking at a public sector conference organised by Amazon Web Services (AWS) in 2018, Sean Roche said: “The age of expeditionary intelligence means going to very unfriendly places very quickly to solve very tough problems.”

He said that small teams of programmers, data scientists and analysts “coding in the field” had delivered “amazing capabilities in the case of finding people we care about”.

“Knowing who they are, what they’re doing, their intentions, where they are,” said Roche, the CIA’s then-associate deputy director for digital innovation.

The presentation then showed a photograph of pilgrims gathered in an outer precinct of Mecca’s Masjid al-Haram, the holiest place in Islam and the site of the Kaaba.

The photo appears to be a stock image from a photography website taken during the Hajj in January 2017. In the version shown in the presentation, however, a yellow circle has been added to highlight the face of a man in the crowd.

De facto targets

MEE has not identified the man. There is no suggestion he is a subject of interest to the CIA.

MEE asked the CIA whether it had the capabilities to deploy surveillance technologies to monitor people attending the Hajj, and whether it would do so during this year’s pilgrimage, which begins on Monday. But the CIA did not respond to MEE’s questions.

The use of the image has nonetheless prompted concern among Muslim advocacy organisations and legal experts on surveillance technologies.

Edward Mitchell, the national deputy director of the Council on American-Islamic Relations (CAIR), told MEE there was a long history of Muslims being depicted as a threat in government training material and presentations.

“Muslims should not be used as the de facto example of how government technology can be deployed. That is especially true of Muslims engaging in worship during the Hajj pilgrimage,” said Mitchell.

Ashley Gorski, senior staff attorney with the American Civil Liberties Union’s National Security Project, told MEE: “Facial recognition technology poses grave risks to privacy and civil liberties. People have the right to pray and worship freely, without fear that they’re being tracked by the government.

“This is yet another example of how US intelligence agencies promote surveillance tools as a way of monitoring and controlling religious communities, even abroad.”

Machine learning

Roche went on to describe how the agency was deploying artificial intelligence to collect and process data.

“We are doing machine learning for what? Our job is humans,” said Roche.

“So we are taking those databases that exist about you. The data that is already there. The data that is structured and unstructured, some data that is being created, and aggregating that data very quickly in the cloud environment to build a digital signature, to understand our digital self.”

Roche’s presentation to the AWS Public Sector Summit came after the CIA signed a $600m contract for cloud computing services with the technology giant in 2014.

Introducing Roche, Teresa Carlson, then a senior executive at AWS, said he would talk about how the company had “empowered the CIA to maintain their security posture and increase the pace of innovation to stay one step ahead of the enemy at all times”.

Roche left the CIA in 2019 and is now national security director at AWS. A spokesperson for AWS declined to comment.

Jumana Musa, director of the Fourth Amendment Center at the National Association of Criminal Defense Lawyers (NACDL), which advises lawyers in cases involving new surveillance tools, said the presentation highlighted questions about how technology was being used beyond US borders where constitutional protections did not apply.

“Historically, the US government has definitely a different set of standards when they consider themselves to be outside the US collecting intelligence information rather than evidence for a prosecution. And the rules are much looser,” Musa told MEE.

Facial recognition

Clare Garvie, a privacy lawyer at NACDL who focuses on facial recognition technology, said: “It’s not just business as usual to be able to scan a crowd of a hundred thousand people or more and purport to be able to identify who they are.

"It's not surprising for the CIA to certainly see an appeal in that extremely powerful surveillance mechanism."

Addressing the potential dangers of AI technology, Roche said: “Some people are worried about AI. Don’t be.”

Quoting German futurist Gerd Leonhard, he said: “Human flourishing must remain the core objective of all technological progress. Humanistic futurism. The machines will not be taking over.”

Last month, dozens of AI experts signed a public statement warning that rapidly developing technology posed an existential threat to humanity.

Risks highlighted by the Center for AI Safety, which published the statement, included weaponisation, and the use of the technology to “enforce narrow values through pervasive surveillance and oppressive censorship”.

Other AI experts have said concerns about the technology are exaggerated.

This article is available in French on Middle East Eye’s French edition.
