Today, Facebook released its first-ever Community Standards Enforcement Report to the public. The report includes a preliminary inventory of rule-violating content and the removal actions Facebook took against it.

The report, which is included in the company’s overall Transparency Report, largely covers content that violated Facebook’s Community Standards and was discovered and removed between October 2017 and March 2018.

It focuses on content that falls into six key categories:

  1. Graphic Violence
  2. Adult Nudity and Sexual Activity
  3. Terrorist Propaganda (ISIS, al-Qaeda and affiliates)
  4. Hate Speech
  5. Spam
  6. Fake Accounts

Earlier this year, Facebook published its content moderation and internal Community Standards guidelines, in hopes of shedding light on why certain items are removed from the network. In the context of this newly released report, that was perhaps an anticipatory move ahead of publishing these content removal figures.


Here’s a look at the nature and extent of content removal in the above six categories.

1. Graphic Violence

Facebook either removed or placed warning labels on roughly 3.5 million pieces of violent content in Q1 2018, 86% of which was flagged by its artificial intelligence (AI) system before anyone reported it to Facebook.

The Community Standards Enforcement Report includes an estimate of the percentage of total content views that consisted of graphic violence. For instance, out of all content viewed on Facebook in Q1 2018, the company reports that somewhere between 0.22% and 0.27% violated standards for graphic violence.

[Screenshot from Facebook’s Community Standards Enforcement Report. Source: Facebook]

That’s up from the estimated 0.16% to 0.19% in Q4 2017, “despite improvements in our detection technology in Q1 2018.” The report attributes the rise simply to a greater amount of this kind of content being published on Facebook.

The 3.5 million pieces of content in this category on which Facebook took action also represent an increase: the volume detected rose from 1.2 million in Q4 2017.

So while more content of this type was likely shared on the network overall, the report says the growth in the amount Facebook took action on is probably due to improvements in its AI detection systems.

2. Adult Nudity and Sexual Activity

Facebook removed 21 million pieces of content containing adult nudity and sexual activity in Q1 2018 — 96% of that was discovered by its AI technology before it was reported.

Facebook estimates that 0.07% to 0.09% of all content viewed on the network in Q1 2018 violated standards for adult nudity and sexual activity, or roughly seven to nine views out of every 10,000.

[Screenshot from Facebook’s Community Standards Enforcement Report. Source: Facebook]

That’s an increase from six to eight views out of every 10,000 in the previous quarter, a change Facebook says is too small to pin down a cause for. The company also took action on a similar number of pieces of content in this category in both quarters.

3. Terrorist Propaganda

Facebook doesn’t currently have statistics on the prevalence of terrorist propaganda on its site, but it does report that it removed 1.9 million pieces of such content from the network in Q1 2018.

[Screenshot from Facebook’s Community Standards Enforcement Report. Source: Facebook]

Facebook’s removal of 1.9 million pieces of extremist content is up more than 72% from the previous quarter, when 1.1 million pieces of terrorist propaganda were removed.

Again, Facebook credits its AI detection systems for the increase: 99.5% of the terrorist propaganda content removed in Q1 2018 was flagged by those systems, compared to 96.9% in Q4 2017.

Facebook classifies terrorist propaganda as that which is “specifically related to ISIS, al-Qaeda and their affiliates.”

4. Hate Speech

One of Facebook’s boasting points in this report is the fact that its artificial intelligence systems were responsible for flagging and removing a good portion of the standards-violating content in many of these categories.

But when it comes to hate speech, writes Facebook VP of Product Management Guy Rosen in a statement, “our technology still doesn’t work that well.”

Human review is still necessary to catch all instances of hate speech, Rosen explains, echoing many of the statements made about AI ethics during F8, Facebook’s annual developer conference.

Not only is hate speech nuanced, but the humans who train the AI systems designed to help moderate content bring their own implicit biases, which can sometimes introduce flaws in the way something as subjective as hate speech is flagged.

Nonetheless, Facebook removed 2.5 million pieces of hate speech in Q1 2018, 38% of which was flagged by AI technology. It does not currently have statistics on the prevalence of hate speech within all content viewed on the site.

[Screenshot from Facebook’s Community Standards Enforcement Report. Source: Facebook]

5. Spam

Facebook defines spam as “inauthentic activity that’s automated (published by bots or scripts, for example) or coordinated (using multiple accounts to spread and promote deceptive content).”

It represents another category for which Facebook does not currently have exact figures of prevalence, as it says it’s still “updating measurement methods for this violation type.”

However, the report says that 837 million pieces of spam content were removed in Q1 2018 — a 15% increase from Q4 2017.

[Screenshot from Facebook’s Community Standards Enforcement Report. Source: Facebook]

6. Fake Accounts

“The key to fighting spam,” writes Rosen, “is taking down the fake accounts that spread it.”

Facebook removed roughly 583 million fake accounts in Q1 2018, a decrease of over 30% from the previous quarter; many of those accounts were disabled almost immediately after they were registered.

And despite these efforts, the company estimates that somewhere between 3% and 4% of all active accounts on Facebook during Q1 2018 were fake.

As for the decrease in fake account removal from the previous quarter, Facebook points to “external factors” like cyberattacks that often come with a deluge of fake account creation on the network — usually by way of scripts and bots, with the goal of spreading as much spam and deceptive information as possible.

Because these factors occur with “variation,” Facebook says, the number of fake accounts on which the company takes action can vary from quarter to quarter.

Why Facebook Is Publishing This Information

In a statement penned by Facebook VP of Analytics Alex Schultz, the company’s reason for making these numbers public is fairly simple: in transparency, there is accountability.

“Measurement done right helps organizations make smart decisions about the choices they face,” Schultz writes, “rather than simply relying on anecdote or intuition.”

And despite strong Q1 2018 earnings, as well as an enthusiastic response from the audience at F8, Facebook continues to face a high degree of scrutiny.

Tomorrow, for instance, brings yet another congressional hearing regarding the Cambridge Analytica scandal, where whistleblower Christopher Wylie is due to testify before the U.S. Senate Judiciary Committee.

This week, Facebook has issued a particularly high volume of statements and announcements about its growing efforts in the areas of transparency and user protections. The last time Facebook issued a high volume of this type of content was in the weeks leading up to CEO Mark Zuckerberg’s congressional hearings.

These latest announcements could indicate preparations for further hearings — some outside of the U.S.

Facebook — Zuckerberg, specifically — is also under mounting pressure from international authorities to testify on user privacy and the weaponization of its network to influence major elections.

The European Parliament continues to press Zuckerberg to appear at a hearing (one it’s now reportedly willing to hold in a closed-door session), after initial rumors of such a testimony surfaced in April.

Additionally, members of U.K. Parliament have been particularly adamant that Zuckerberg appear before them, after recent testimony from CTO Mike Schroepfer allegedly left several questions unanswered.

In an open letter to Facebook dated May 1, House of Commons Culture Committee chairman Damian Collins wrote that “the committee will resolve to issue a formal summons for [Zuckerberg] to appear when he is next in the UK.”

I have today written to @facebook requesting that Mark Zuckerberg appears in front of @CommonsCMS as part of our inquiry into fake news and disinformation. Read it here: https://t.co/jXZ5TjiZld pic.twitter.com/m0NU5Uyf2L

— Damian Collins (@DamianCollins) May 1, 2018

Yesterday, Facebook’s U.K. Head of Public Policy Rebecca Stimson issued a written response to that letter, in which she outlined answers to the 39 questions that the Committee said were left unanswered by Schroepfer’s testimony. 

“It is disappointing that a company with the resources of Facebook chooses not to provide a sufficient level of detail and transparency on various points,” Collins responded today. “We expected both detail and data, and in a number of cases got excuses.”

Featured image credit: Facebook
