Facebook dabbling with 'Artificial Intelligence' to remove terrorist content

Among the techniques Facebook describes is image matching: if someone posts a picture of a known terrorist, Facebook's software can match it against, for example, a propaganda video from ISIS, or against other extremist images and videos the company has already removed.
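
Facebook has not published the details of its matching system. As a rough sketch of how this kind of near-duplicate detection can work, the example below computes an "average hash" fingerprint and compares images by Hamming distance; the Pillow library, the threshold and the file names are illustrative assumptions, not part of Facebook's actual stack.

```python
# Illustrative average-hash image matching; not Facebook's actual algorithm.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to 8x8, convert to grayscale, threshold at the mean: a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests a near-duplicate image."""
    return bin(a ^ b).count("1")

# Compare a new upload against fingerprints of frames from already-removed propaganda.
known_fingerprints = {average_hash("removed_propaganda_frame.jpg")}
upload = average_hash("new_upload.jpg")
if any(hamming_distance(upload, known) <= 5 for known in known_fingerprints):
    print("Near-duplicate of previously removed content; flag for review.")
```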

"This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too". Recent attacks have made people question what we should be doing to stand up against radicalisation but our commitment is long standing.

The post concludes: "We want Facebook to be a hostile place for terrorists".

"Our stance is simple: There's no place on Facebook for terrorism", Facebook's Director of Global Policy Management Monika Bickert and Counterterrorism Policy Manager Brian Fishman wrote in a blog post.

Moreover, Facebook noted that partnerships with other technology companies, civil society and governments are crucial for tackling terrorist content because "terrorists can jump from platform to platform".

Earlier this week in Paris, the British prime minister and the president of France launched a joint campaign to ensure the internet could not be used as a safe space for terrorists and criminals. Facebook's post says: "We are now focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organizations in due course".

Artificial intelligence will largely be used in conjunction with human moderators who review content on a case-by-case basis. When material is identified and removed, algorithms "fan out to try to identify related material that may also support terrorism".
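
Facebook has not described how that fan-out works internally. One plausible shape for it is a breadth-first walk over a graph of related items (same uploader, shared links, matching fingerprints) that queues candidates for human review; the relation graph, depth limit and item names below are assumptions made purely for illustration.

```python
# Sketch of a "fan out" from removed content to related material; not Facebook's system.
from collections import deque

def fan_out(removed_item, related, max_depth=2):
    """Breadth-first expansion from a removed item; returns candidates for human review."""
    seen = {removed_item}
    queue = deque([(removed_item, 0)])
    candidates = []
    while queue:
        item, depth = queue.popleft()
        if depth == max_depth:
            continue
        for neighbor in related(item):        # e.g. same uploader, shared link, matching hash
            if neighbor not in seen:
                seen.add(neighbor)
                candidates.append(neighbor)   # escalate to human reviewers, not auto-removal
                queue.append((neighbor, depth + 1))
    return candidates

# Toy relation graph: which items share an uploader or link with a given item.
links = {"removed_post": ["repost_a", "page_b"], "repost_a": ["video_c"], "page_b": []}
print(fan_out("removed_post", lambda item: links.get(item, [])))
# ['repost_a', 'page_b', 'video_c']
```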

But Facebook also says that, already, more than half of the accounts it removes for supporting terrorism are ones it finds itself.

AI, Facebook says, is also useful for identifying and removing "terrorist clusters".
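
The article does not detail how such clusters are identified, but the idea can be illustrated as a simple graph computation: treat accounts as nodes, connect them by signals such as friending or reposting content from an account already removed for supporting terrorism, and examine the connected component around a confirmed violator. The signals, account names and the use of the networkx library here are illustrative assumptions, not Facebook's method.

```python
# Illustrative cluster detection over an account graph using connected components.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("removed_account", "account_a"),   # hypothetical signal: friendship or reposting
    ("account_a", "account_b"),
    ("account_c", "account_d"),         # unrelated pair, not part of the cluster
])

# The connected component around a confirmed violator is a candidate "cluster"
# to surface for human counterterrorism review.
cluster = nx.node_connected_component(G, "removed_account")
print(cluster)  # {'removed_account', 'account_a', 'account_b'}
```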

Over the past year Facebook has increased its team of counterterrorism experts and now has more than 150 people primarily dedicated to that role.

It's also collaborating with fellow technology companies and consulting with researchers to keep up with the ever-changing social media tactics of the Islamic State and other terror groups.

Facebook's public discussion of its counterterrorism efforts is the first in a new series of efforts by the company to bring more transparency and discussion to the many problems that Facebook has either created or contributed to. Other questions the company says it plans to take on include: "Is social media good for democracy?" Some government agencies, including the U.S. Federal Bureau of Investigation and the U.K. Home Office, have called on tech companies to ensure that law enforcement can access encrypted messages. In their post, Bickert and Fishman countered that encryption is essential for journalists, aid workers and human rights campaigners, as well as for keeping banking details and personal photos secure from hackers.

Facebook has also partnered with Microsoft, YouTube and Twitter to develop a shared industry database of "hashes", or digital fingerprints, of terrorist content.
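
The article does not say what format those fingerprints take. The toy sketch below uses a SHA-256 digest as a stand-in to show the basic exchange: one company contributes the fingerprint of content it has removed, and another can then check new uploads against the shared set. The function names and in-memory set are hypothetical.

```python
# Toy model of a shared hash database; SHA-256 stands in for whatever
# fingerprint format the participating companies actually exchange.
import hashlib

shared_hash_db: set[str] = set()   # in practice, a service shared across companies

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def report_removed_content(data: bytes) -> None:
    """A company that removes terrorist content contributes its fingerprint."""
    shared_hash_db.add(fingerprint(data))

def is_known_terrorist_content(data: bytes) -> bool:
    """Other platforms can check new uploads against the shared set."""
    return fingerprint(data) in shared_hash_db

report_removed_content(b"<bytes of a removed propaganda video>")
print(is_known_terrorist_content(b"<bytes of a removed propaganda video>"))  # True
```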

Facebook has revealed it is using artificial intelligence in its ongoing fight to prevent terrorist propaganda from being disseminated on its platform.
