Facebook: How we remove Islamic State and other 'terrorism' content
Facebook on Thursday offered additional insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups' use of the social network for propaganda and recruiting.
Facebook has ramped up its use of artificial intelligence such as image-matching and language-understanding to identify and remove content quickly, Monika Bickert, Facebook's director of global policy management, and Brian Fishman, counter-terrorism policy manager, said in a blog post.
As the world's largest social media network, with 1.9 billion users, Facebook has not always been so open about its operations, and its statement was met with scepticism by some who have criticised US technology companies for moving slowly.
"We've known that extremist groups have been weaponising the internet for years," said Hany Farid, a Dartmouth College computer scientist who studies ways to stem extremist material online.
"So why, for years, have they been understaffing their moderation? Why, for years, have they been behind on innovation?" Farid asked. He called Facebook's statement a public relations move in response to European governments.
Britain's interior ministry welcomed Facebook's efforts but said technology companies needed to go further.
"This includes the use of technical solutions so that terrorist content can be identified and removed before it is widely disseminated, and ultimately prevented from being uploaded in the first place," a ministry spokesman said on Thursday.
Germany, France and Britain, countries where civilians have been killed in bombings and shootings by militants in recent years, have pressed Facebook and other providers of social media, including Google and Twitter, to do more to remove militant content and hate speech.
Government officials have threatened to fine Facebook and strip broad legal protections it enjoys against liability for content posted by its users.
Facebook's image-matching systems check whether a photo or video being uploaded matches known content from organisations it has designated as terrorist, such as the Islamic State (IS) group, al-Qaeda and their affiliates, the company said in the blog post.
YouTube, Facebook, Twitter and Microsoft last year created a common database of digital fingerprints automatically assigned to videos or photos of militant content to help each other identify the same content on their platforms.
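The shared-database approach described above amounts to fingerprint matching: previously removed content is hashed, and new uploads are checked against the stored hashes. The sketch below illustrates the idea with cryptographic hashes; this is a simplified assumption, as the companies' real systems use perceptual hashes (such as PhotoDNA) that survive re-encoding and minor edits, and all function names here are hypothetical.

```python
import hashlib

# Hypothetical shared database of fingerprints of known militant content.
# Real systems use perceptual hashes robust to re-encoding; SHA-256 here
# only matches byte-for-byte identical files.
known_fingerprints = set()

def fingerprint(data: bytes) -> str:
    """Return a fingerprint for an uploaded photo or video."""
    return hashlib.sha256(data).hexdigest()

def register_known_content(data: bytes) -> None:
    """Add previously removed content to the shared database."""
    known_fingerprints.add(fingerprint(data))

def should_block_upload(data: bytes) -> bool:
    """Check an upload against the shared fingerprint database."""
    return fingerprint(data) in known_fingerprints
```

Because each platform stores only hashes, companies can flag the same content without exchanging the underlying photos or videos themselves.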
Similarly, Facebook now analyses text that has already been removed for praising or supporting militant organisations to develop text-based signals for such propaganda.
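The text-analysis step can be pictured as learning signal terms from posts already removed and scoring new posts against them. The following is a toy bag-of-words sketch under that assumption; Facebook has not disclosed its method, which would in practice involve trained machine-learning classifiers rather than simple term counts, and every identifier below is illustrative.

```python
from collections import Counter

def learn_signals(removed_posts: list[str], top_n: int = 5) -> set[str]:
    """Collect the most frequent terms in posts already removed for
    praising or supporting militant organisations (toy signal model)."""
    counts = Counter(
        word for post in removed_posts for word in post.lower().split()
    )
    return {word for word, _ in counts.most_common(top_n)}

def flag_for_review(post: str, signals: set[str], threshold: int = 2) -> bool:
    """Flag a new post for review if it contains enough signal terms."""
    hits = sum(1 for word in post.lower().split() if word in signals)
    return hits >= threshold
```

A flagged post would then go to human reviewers rather than being removed automatically, reflecting the mix of automation and human judgment the blog post describes.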
"More than half the accounts we remove for terrorism are accounts we find ourselves; that is something that we want to let our community know so they understand we are really committed to making Facebook a hostile environment for terrorists," Bickert said in a telephone interview.
Asked why Facebook was opening up now about policies that it had long declined to discuss, Bickert said recent attacks were naturally starting conversations among people about what they could do to stand up to militancy.