
Facebook removed 22.5m items of hate speech in just three months


Facebook took action against 22.5 million pieces of hate speech content during the first few months of the coronavirus pandemic, according to new data.

The social media giant’s latest Community Standards Enforcement Report, which covers the period from April to June 2020, shows that hate speech is still a major problem on the platform.

The amount of hate speech it took action on more than doubled versus the previous quarter, when it dealt with 9.6 million pieces of content. 

The firm’s efforts to tackle the issue were even more pronounced on Instagram, where it took action against 3.3 million items, compared to 808,900 in the January-March period.

Facebook’s stepped-up measures against hate speech include expanding its automated detection technology beyond English and improving its English-language detection systems.

It has also updated its hate speech guidelines, which it says will “more specifically account for certain kinds of implicit hate speech, such as content depicting blackface, or stereotypes about Jewish people controlling the world.”

While the figures look promising, Facebook’s removals of “organised” hate (content produced by hate groups and organisations) fell from 4.7 million pieces of content in the previous quarter to 4 million in the April-June period.

QAnon conspiracy theorists, in particular, have thrived on the platform, with the tech giant removing more than 790 groups, 100 Pages and 1,500 ads tied to the far-right movement.

The crackdown follows a recent Guardian investigation, which found that Facebook’s algorithm was promoting QAnon groups to users.

A spokesperson for Facebook said the firm had “consistently” taken action against accounts, Groups, and Pages tied to QAnon that broke the rules.

Despite this, civil rights leaders warn that the company still allows too much “racist, hateful, and violent content” on its platform, something that Facebook addresses in its Commitment to Safety testimony. 

Terrorist content has also come under scrutiny, with the platform taking action on 8.7 million pieces of content this quarter, compared with 6.3 million between January and March.

Facebook says this was driven by “expanding our proactive detection technology to help us detect and review more potential violations.” Altogether, 99.6 per cent of terrorist content was removed before users reported it.

While the full effect of Facebook’s hate speech restrictions remains to be seen, the firm has announced that a third-party auditor will review its content moderation and add transparency to its reports.

Facebook isn’t the only company focused on combating hate speech this year: Twitter recently banned “dehumanizing remarks” based on age, disability, and disease. Snapchat, meanwhile, stopped promoting Donald Trump’s account on its ‘Discover’ page, citing his commentary on “racial violence and injustice” as the justification.

Parent Zone has a wealth of resources to help your child deal with hate speech, including Parent Guides that cover reporting to different apps and platforms, plus lots of information on developing Digital Resilience.

If you’re worried about radicalised content, you can get help on the Educate Against Hate website, or alternatively report material promoting extremism and terrorism through the government’s online reporting tool.

Image: chinnarach/
