
‘Trove of racial hatred’ exposed by investigation into online platforms

Twitter and Facebook have taken action to remove hate accounts amid calls for greater regulation.

Emily Pennink
Tuesday 11 January 2022 23:30 EST
Twitter and Facebook have taken action to remove hate accounts amid calls for greater regulation (PA)


Tech giants have come under fresh fire after an investigation exposed hundreds of thousands of hate profiles online.

Research by the Centre for Analysis of the Radical Right (CARR) uncovered a “foul trove of racial hatred” on Twitter and Facebook and amongst the gaming community.

It has led to a call for greater regulation six months on from public outcry at the abuse of three black England footballers.


Professor Matthew Feldman, director of CARR, told the PA news agency: “Finding a foul trove of racial hatred on social media is still shockingly easy.

“It makes you wonder what the point of moderation is when some of these obvious, overt and in some cases violence-inciting accounts can go literally years with no consequences, and certainly no moderation.

“This material is disgusting and makes it seem that platforms just don’t care enough to address this running sore.”

Some usernames – or ‘handles’ – made a flagrant and even proud mockery of Twitter’s terms of service, Prof Feldman added, saying: “What’s the point of claiming to provide moderation when this stuff is only a click away?

“Is Facebook really unable to moderate celebrations of the Holocaust because of the odd apostrophe?


“It doesn’t matter if you have billions of users if the most vulnerable are subjected to this kind of abuse repeatedly, and seemingly without either protection or action.”

He said platforms had a “duty of care” to users but only Government regulation and the threat of tens of millions in fines would bring change.

“Otherwise, these platforms will stay reactive – badly – rather than proactive in taking down hateful extremism,” Prof Feldman said.

Last July, England footballers Marcus Rashford, Jadon Sancho and Bukayo Saka were targeted after missing penalties in the Euro 2020 final at Wembley.

Twitter removed more than 1,900 tweets but acknowledged it needed to do better.

Six months on, CARR researchers looked for profiles using simple words and phrases as indicators of “systemic failure” over two days in January.

They found around 300 users or profile names on Twitter derived from a racist phrase, including the N-word, dating as far back as 2009.

Dr Edward Gillbard, who carried out the research, said the majority had minimal interaction, with fewer than two followers and following fewer than two accounts.

He said: “It’s not clear whether this lack of interaction reduces the chance of the account being seen by ‘normal’ Twitter users and thus less likely to be reported.

“Either way, it would appear that there is no automatic moderation being performed by Twitter in terms of analysing existing accounts for offensive usernames containing (the N-word), and no moderation when it comes to initially setting your username or handle to contain the same term.”

Dr Bethan Johnson from CARR identified dozens of offensive Facebook profiles, including 83 variants of “hate (N-word)” and 91 on the Holocaust.

Others included the names of Adolf Hitler and other high-profile Nazis, as well as the names of mass killers such as the Christchurch mosque attacker in New Zealand.

By changing the spelling or inserting spaces and special characters, profiles appeared to fool moderation systems, she suggested.

Dr Johnson said the findings highlighted “significant room for improvement”.

She said: “In the case of Facebook, it may be that when users set up profiles with names that clearly mock and flout community standards — from ‘Jewkilla’ to ‘Nate Higgers’ — they are telling Facebook what kind of user they will be, what kind of ideas they bring to the platform, and the reality is that is far from community-orientated.”

An analysis of the digital gaming service Steam revealed more than 300,000 offensive profile names.

Of those, 241,729 were anti-black, 44,368 white supremacist, more than 28,000 neo-Nazi, 8,021 anti-Semitic, 5,607 homophobic, and 168 anti-Muslim.

More than 100 racist and far-right extremist profile names were identified on the game Fortnite and a further 34 on Rainbow Six Siege, 18 of which were active.

Although the research was not exhaustive, Dr Gillbard, of the Web Science Institute at the University of Southampton, suggested the material was “just the tip of the iceberg”.

“In order to find the full scale of the issue, increased access and co-operation from these platforms and services is required.

“In addition, with regards to gaming, the abuse and offensive language found in text and voice chats is a significantly worse issue than the types of usernames highlighted here,” he said.


Last month, a separate report estimated there were nearly half a million explicitly anti-Semitic tweets a year – two for every Jew in the UK.

Danny Stone MBE, chief executive of the Antisemitism Policy Trust, which jointly published the report, told PA: “Six months from the Euro finals, a year from the insurrection at the US Capitol, but the story remains the same – social media companies profiting from the sale of our data but failing to properly protect people from harm.

“We have policies that are not properly enforced, racism at scale, targeted abuse.

“Big Tech lacks any urgency in preventing harm from being spread by its systems.

“I hope the forthcoming Online Safety Bill, and legislation across the world, will force social media companies to better look after their users because they appear to be in no hurry to help.”


A spokesman for Meta, the parent company of Facebook, Instagram and WhatsApp, said hate speech was not allowed on its platforms and the “violating” Facebook accounts were removed after being flagged.

He added: “If we find content that violates our policies, including the use of symbols, emojis or misspellings attempting to beat our systems, we will remove it.”

Twitter also said the accounts identified by CARR had now been “permanently suspended” for “violating our hateful conduct policy”.

A spokesman said: “We acknowledge and want to reiterate our commitment to ensuring that Twitter doesn’t become a forum that facilitates abuse and we continue to examine our own policy approaches and ways we can enforce our rules at speed and scale.”

A spokeswoman for Fortnite developer Epic Games said: “Many of these usernames are no longer in our systems and we have taken action against additional usernames provided.

“Usernames that include vulgarity, hate speech, offensive or derogatory language of any kind are in violation of our community rules.”

Epic Games has no control over console names but has passed details on to the companies concerned.

PA understands the Rainbow Six Siege profiles have been reset with randomised names and any offending pictures removed.

Users will receive a warning from the game’s creator Ubisoft as a first sanction and will not be able to change their username for the following 30 days.

A spokesman for Ubisoft said the company “does not tolerate any form of bullying or harassment”.

The firm takes “concrete actions” to tackle “toxic” behaviour, and violations of its code of conduct could lead to sanctions, including bans, he said.

While automated processes were not “foolproof”, teams are constantly working on improving them, he added.

PA has contacted Steam for a response.
