Facebook, Aiming for Transparency, Details Removal of Posts and Fake Accounts


Most of Facebook's removal efforts, however, centered on spam and the fake accounts promoting it.

In its first quarterly Community Standards Enforcement Report, Facebook said the overwhelming majority of its moderation actions were against spam posts and fake accounts: it took action on 837 million pieces of spam and shut down a further 583 million fake accounts over the three-month period.

Though Facebook extolled its forcefulness in removing content, the average user may not notice any change.

The report comes amid increasing criticism of how Facebook controls the content it shows to users, though the company was careful to highlight that its new methods are evolving and not set in stone, CNET's Parker reports. Beyond spam and fake accounts, Facebook also moderated 2.5 million pieces of hate speech, 1.9 million pieces of terrorist propaganda, 3.4 million pieces of graphic violence and 21 million pieces of content featuring adult nudity and sexual activity.

Only 38 percent of the 2.5 million pieces of hate speech removed were flagged by the company's own detection systems before users reported them.
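
For illustration, that proactive-detection figure reduces to simple arithmetic; a minimal sketch, where the `proactive_rate` helper and the 950,000 proactively flagged pieces are our back-derived assumptions rather than published numbers:

```python
def proactive_rate(flagged_by_systems: int, total_actioned: int) -> float:
    """Share of actioned content that automated systems flagged
    before any user report (illustrative, not Facebook's own code)."""
    return flagged_by_systems / total_actioned

# The report's hate-speech figures: roughly 38% of the 2.5 million
# pieces removed were detected proactively (950,000 is an assumed
# count implied by that percentage, not a published number).
total_removed = 2_500_000
flagged_proactively = 950_000
print(f"{proactive_rate(flagged_proactively, total_removed):.0%}")  # -> 38%
```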

The company estimated that for every 10,000 pieces of content viewed on Facebook, between seven and nine violated its adult nudity and sexual activity standards. Roughly 8 of every 10,000 content views in the first quarter were removed for featuring sex or nudity, up from roughly 7 per 10,000 at the end of last year.
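
To make the prevalence arithmetic concrete, here is a minimal sketch; the view totals are invented for illustration, and only the "per 10,000 views" framing comes from the report:

```python
def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Violating views per 10,000 content views, the unit Facebook's
    report uses for its prevalence estimates (illustrative only)."""
    return violating_views / total_views * 10_000

# Hypothetical traffic: 8,000 violating views out of 10 million total
# views corresponds to the report's "roughly 8 in 10,000" (0.08%).
print(prevalence_per_10k(violating_views=8_000, total_views=10_000_000))  # 8.0
```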

Facebook is struggling to catch much of the hateful content posted on its platform because the computer algorithms it uses to track it down still require human assistance to judge context, the company said Tuesday. The rest of the removed hate speech came to light only after Facebook users flagged the offending content for review. Where takedown numbers rose, "this increase is mostly due to improvements in our detection technology", the report notes.

Last week, Alex Schultz, the company's vice president of growth, and Guy Rosen, its vice president of product management, walked reporters through exactly how the company measures violations and how it intends to deal with them.

The new report was released in conjunction with Facebook's latest Transparency Report, which said that government requests for account data worldwide rose four percent in the second half of 2017 compared with the first half.

Facebook said users were posting images of violence more heavily in places like war-torn Syria, and that improved detection led to old as well as new content of this type being taken down. In all, the company removed or added warning labels to about 3.5 million pieces of graphic violence content.

The company took down 837 million pieces of spam in Q1 2018, almost all of which was flagged before any users reported it. "This is especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards", the report said.

Spam takedowns were up 15 percent from 727 million pieces in the fourth quarter. The 583 million fake accounts Facebook disabled during the first three months of the year were down from 694 million in the previous quarter. "And more generally, as I explained last week, technology needs large amounts of training data to recognise meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported", Rosen wrote.
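
The quarter-on-quarter change in spam enforcement can be checked directly; a minimal sketch using only the figures quoted above:

```python
# Percentage change in spam actioned, Q4 2017 -> Q1 2018,
# using the totals reported in the article (illustrative check).
q4_spam = 727_000_000
q1_spam = 837_000_000
pct_change = (q1_spam - q4_spam) / q4_spam * 100
print(f"{pct_change:.0f}%")  # -> 15%
```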
