Facebook claims it is ‘incredibly proactive’ in taking down harmful content despite flurry of scandals

Many statements from Antigone Davis, the global head of safety at the company, stand in contrast to reporting based on Facebook’s internal documents

Adam Smith
Thursday 28 October 2021 12:09 EDT


Facebook has claimed that it is “incredibly proactive” in taking down harmful content while giving evidence to Parliament today, amid a flurry of negative stories prompted by whistleblower disclosures.

Antigone Davis, the global head of safety at the company, claimed that Facebook was not only “responsive” with regards to taking down posts, but that it actively searched out problem content.

Ms Davis gave the answer when asked about Apple’s threat to take Facebook and Instagram off iPhones after it found human trafficking was being organised on its apps. Facebook is also struggling to identify and remove content posted by trafficking cartels based in Mexico, including violent images and recruitment materials.

“Most of the things that are brought to our attention are managed within 48 hours”, Ms Davis claimed. “Our AI is not perfect, it’s something we’re always looking to improve.”

The committee is gathering evidence as it develops online safety legislation, which would force social media companies to regulate “legal but harmful” content.

Ms Davis also said that Facebook has “no business incentive, no commercial incentive to actually provide people with a negative experience”, and said that “three million businesses in the UK use our platform to grow their business. If they aren’t safe, if they don’t feel safe, they aren’t going to use our platform”.

This statement stands in contrast to leaked audio of Mark Zuckerberg who, following a boycott by advertisers over the amount of racist content on Facebook, said he expected advertisers to be back on the platform “soon enough” and that he would not “change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue”.

With regards to Facebook’s algorithm, and the insurrection attempt on January 6, Ms Davis claimed that the company put in “serious measures to address those issues well before January 6”.

Those measures have, however, repeatedly been criticised for failing to fully deal with the extremist content that was available on the platform.

Reporting has, for instance, suggested that Facebook was alerted to a ‘Stop the Steal’ group on 3 November, the day of the US election, when it was “flagged for escalation because it contained high levels of hate and violence and incitement (VNI) in the comments”. Two days later, the group had grown to over 300,000 members.

Mr Zuckerberg, Facebook’s chief executive, later told Congress that the company “made our services inhospitable to those who might do harm”.

When asked whether Facebook changed the algorithm following the January 6 event, Ms Davis did not provide a clear answer.

Ms Davis was also asked why, when Facebook can identify harmful content, its algorithms continue to promote it. She answered that the company “tr[ies] to remove content that is divisive, for example, or polarising”.

In May 2020, it was reported that Facebook executives took the decision to end research that would have made the social media site less polarising, over fears that it would unfairly target right-wing users. Proposals to make the site less polarising were described as “antigrowth” and requiring “a moral stance”.

“Our algorithms exploit the human brain’s attraction to divisiveness,” a 2018 presentation warned.

Ms Davis also told MPs that Facebook was “committed to providing more transparency [and that it had] taken steps to do that”.
