Facebook Removed 2.5 Million Pieces of Hate Speech in the First Quarter

What's the score?

According to the numbers, which cover the six-month period from October 2017 to March 2018, Facebook's automated systems quickly remove millions of pieces of spam, pornography, graphic violence and fake accounts, but hate speech, including terrorist propaganda, still requires extensive manual review to identify.

The company also removed 21 million pieces of content that contained adult nudity or sexual activity, flagging nearly 96 per cent of the content with its own systems.

The company removed, or placed a warning screen in front of, 3.4 million pieces of content depicting graphic violence in the first quarter, almost triple the 1.2 million a quarter earlier, according to the report. The report said the company had cracked down on 837 million posts for spam, 21 million pieces of content for adult nudity or sexual activity and 1.9 million for promoting terrorism.

It also said Facebook "disabled" about 583 million fake accounts in Q1 - "most of which were disabled within minutes of registration". Facebook has come under intense scrutiny over disinformation campaigns run by Russian trolls, as well as a data scandal involving 87 million people.

Facebook attributed the increase in graphic-violence takedowns to its enhanced use of photo-detection technology.

On Tuesday morning, Facebook released its Community Standards Enforcement Preliminary Report, providing a look at the social network's methods for tracking content that violates its standards, how it responds to those violations, and how much content the company has recently removed.

Now, however, artificial intelligence technology does much of that work. On hate speech, the report concedes: "We tend to find and flag less of it, and rely more on user reports, than with some other violation types". The 0.22 per cent figure for graphic violence, meanwhile, measures views rather than posts: graphic content accounted for 0.22 per cent of total content views, not 0.22 per cent of everything posted on Facebook.

Over the past year, the company has repeatedly touted its plans to expand its team of reviewers from 10,000 to 20,000.

The bulk of the posts were found and flagged by the firm before users reported them to Facebook, driven by improvements in artificial intelligence technology. "While not always flawless, this combination helps us find and flag potentially violating content at scale before many people see or report it".

Facebook plans to continue publishing enforcement reports, and will refine its methodology for measuring how much violating content circulates on the platform.

"It may take a human to understand and accurately interpret nuances like... self-referential comments or sarcasm", the report said, noting that Facebook aims to "protect and respect both expression and personal safety". Facebook still estimates that fake profiles represent 3 per cent to 4 per cent of monthly active users.

Rosen also said that Facebook blocks millions of attempted fake-account registrations every day, but did not specify how many.

While AI is getting more effective at flagging content, Facebook's human reviewers still have to finish the job.