Meta says prevalence of harmful content on Facebook and Instagram remained ‘consistent’
Meta has published its latest community standards enforcement report, detailing content actioned across Facebook and Instagram.
The prevalence of harmful content on Facebook and Instagram remained “relatively consistent” during the first three months of this year, the platforms’ parent company Meta said, as it published its latest community standards enforcement report.
According to the figures, the amount of spam and violence and incitement content removed from Facebook increased, while Instagram reported a rise in the amount of drug content removed.
But Meta said the prevalence of harmful content had decreased slightly in some areas, including bullying and harassment, thanks to improvements to the company’s proactive detection technology.
The report also noted a slight increase in the prevalence of adult nudity and sexual activity content on Facebook compared with the last three months of 2021, which Meta said was due to “an increase in spam actors sharing large volumes of videos containing nudity that violated our policy”.
Meta said it removed more than 1.6 billion fake accounts during the first three months of 2022, a slight decrease on the 1.7 billion it removed in the final three months of 2021.
The amount of terrorism and organised hate content actioned on Facebook also increased compared with the previous quarter – with more than 2.5 million pieces of organised hate and 16.1 million pieces of terrorist content taken down in the first three months of this year.
“Over the years we’ve invested in building technology to improve how we can detect violating content,” Meta vice president of integrity, Guy Rosen, said.
“With this progress we’ve known that we’ll make mistakes, so it’s been equally important along the way to also invest in refining our policies, our enforcement and the tools we give to users.”
Mr Rosen also said the company was ready to refine policies as needed when new content regulations for the tech sector are introduced.
The UK’s Online Safety Bill, currently making its way through Parliament, would introduce strict new content rules around online harms for platforms such as Facebook and Instagram. The EU is working on its own regulation, and a similar approach is expected in the United States.
“As new regulations continue to roll out around the globe, we are focused on the obligations they create for us,” Mr Rosen said.
“So we are adding and refining processes and oversight across many areas of our work. This will enable us to make continued progress on social issues while also meeting our regulatory obligations more effectively.”