Facebook: We're better at policing nudity than hate speech

SAN FRANCISCO (AP) — Getting rid of racist, sexist and other hateful remarks on Facebook is more challenging than weeding out other types of unacceptable posts because computer programs still stumble over the nuances of human language, the company revealed today.

Facebook also released statistics that quantified how pervasive fake accounts have become on its influential service, despite a long-standing policy requiring people to set up accounts under their real-life identities.

From October through March, Facebook disabled nearly 1.3 billion accounts, and that figure doesn't even count all the times the company blocked bogus profiles before they could be set up.

Had the company not shut down all those fake accounts, its audience of monthly users would have swelled beyond its current 2.2 billion, and those accounts probably would have created even more potentially offensive material for Facebook to weed out.

Facebook's self-assessment showed its screening system is far better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda than at catching hate speech. Automated tools detected 86 percent to 99.5 percent of the violations Facebook identified in those three categories.

For hate speech, Facebook's human reviewers and computer algorithms identified just 38 percent of the violations. The rest came after Facebook users flagged the offending content for review.

All told, Facebook took action on nearly 1.6 billion pieces of content during the six months ending in March, a tiny fraction of all the activity on its social network, according to the company.

The report marked Facebook's first breakdown of how much material it removes for violating its policies. It didn't disclose how long Facebook takes to remove material that violates its standards, nor did it cover how much inappropriate content the company missed altogether.