Facebook has started deploying its artificial intelligence capabilities to help combat terrorists’ use of its service.
Company officials said in a blog post on Thursday that Facebook will use AI in conjunction with human reviewers to find and remove “terrorist content” immediately, before other users see it. Such technology is already used to block child pornography from Facebook and other services such as YouTube, but Facebook had been reluctant to apply it to other, potentially less clear-cut uses.
In most cases, Facebook only removes objectionable material if users first report it.
Facebook and other internet companies face growing government pressure to identify and prevent the spread of terrorist propaganda and recruiting messages on their services. Earlier this month, British Prime Minister Theresa May called on governments to form international agreements to prevent the spread of extremism online. Some proposed measures would hold companies legally accountable for the material posted on their sites.
The Facebook post — by Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager — did not specifically mention May’s calls. But it acknowledged that “in the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online.”
“We want to answer those questions head on. We agree with those who say that social media should not be a place where terrorists have a voice,” they wrote.
Among the AI techniques used in this effort is image matching, which compares photos and videos people upload to Facebook with “known” terrorist images or videos. A match generally means either that Facebook had previously removed that material, or that it had ended up in a database of such images that Facebook shares with Microsoft, Twitter and YouTube.
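The blog post does not describe the matching mechanism in detail; industry databases of this kind typically store digital fingerprints (hashes) of removed material rather than the images themselves. A minimal sketch of the idea, using an exact cryptographic hash purely for illustration (production systems use perceptual hashes that survive re-encoding and cropping), might look like this:

```python
import hashlib

# Hypothetical shared database: hashes of previously removed material.
# Real systems use perceptual hashing robust to edits; SHA-256 here is
# only an illustration of hash-based lookup.
KNOWN_HASHES = {
    # SHA-256 of the bytes b"test", standing in for flagged content.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_content(file_bytes: bytes) -> bool:
    """Return True if an upload's fingerprint is already in the database."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

Because only fingerprints are exchanged, companies can share the database without redistributing the underlying material.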
Facebook is also developing “text-based signals” from previously removed posts that praised or supported terrorist organisations. It will feed those signals into a machine-learning system that, over time, will learn how to detect similar posts.
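The post does not say what model Facebook uses; the general approach of learning text signals from previously removed posts can be sketched with a simple naive-Bayes classifier over toy, invented examples. Everything below (the training samples, function names) is hypothetical illustration, not Facebook's actual system:

```python
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def train(samples: list[tuple[str, int]]):
    """Count token frequencies per class (1 = removed post, 0 = ordinary)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in samples:
        tokens = tokenize(text)
        counts[label].update(tokens)
        totals[label] += len(tokens)
    vocab = set(counts[0]) | set(counts[1])
    return counts, totals, vocab

def classify(model, text: str) -> int:
    """Pick the class with the higher log-likelihood, with Laplace smoothing."""
    counts, totals, vocab = model
    logp = {0: 0.0, 1: 0.0}
    for label in (0, 1):
        for token in tokenize(text):
            p = (counts[label][token] + 1) / (totals[label] + len(vocab))
            logp[label] += math.log(p)
    return 1 if logp[1] > logp[0] else 0
```

As more removed posts are fed in as training samples, the token statistics shift and the classifier flags similar new posts, which is the “learn over time” behaviour the blog post describes.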
Bickert and Fishman said that when Facebook receives reports of potential “terrorism posts,” it reviews those reports urgently. In addition, it says that in the rare cases when it uncovers evidence of imminent harm, it promptly informs authorities.
But AI is just part of the process. The technology is not yet at the point where it can understand nuances of language and context, so humans are still in the loop.
Facebook says it employs more than 150 people who are “exclusively or primarily focused on countering terrorism as their core responsibility.” This includes academic experts on counterterrorism, former prosecutors, former law enforcement agents, analysts and engineers, according to the blog post.