How Twitter became a far-right misinformation vehicle for the riots – and how it could be saved

Site has lost some key protections against the spread of misinformation – but there is hope that some of them could come back

Andrew Griffin
Tuesday 06 August 2024 12:15 EDT


X, formerly known as Twitter, has been repeatedly accused of helping inflame tensions in the UK as the country is rocked by riots and violent unrest.

The site’s misinformation policies and technologies are in the spotlight as critics argue that it has allowed inflammatory and false posts to be boosted on its platform, and could help encourage further disorder.

Even before Elon Musk took over the site in late 2022, when it was still called Twitter, the company was often criticised for its failure to properly police misleading and provocative posts. In response, it launched a range of tools that were intended to take down problematic posts and boost those that shared good information.

Mr Musk has weakened or removed some of those protections in the time since. And experts say that the decision to do so has helped spread misinformation on a platform that was already not free of it.

But that is a reason to hope, too: there are features, some of which are still present on the site, that could be used to limit the spread of such content. Here are the tools that X has to moderate the posts on the site – and how they could be brought back.

Verification

One of Elon Musk’s first big moves on what was then Twitter was to remove the verification ticks that used to make clear that a person’s identity had been verified. Instead, he asked users to pay to get the blue tick.

The incentive for Mr Musk was obvious: getting people to pay for the ticks generated more revenue. The blue tick had become a prestigious symbol that people would pay for – and that prestige had bred some resentment among users towards those who had one.

But the effect was broader than that. It moved the site from having an identity-based verification system to one based on paying, which meant that there was no real way of knowing which accounts on the site were to be trusted.

What’s more, Mr Musk promised those paying for the verification tick that their posts would be boosted on Twitter – appearing higher up in the list of replies, for instance. That meant that people were able to pay both to look legitimate and have their posts seen by more users.

Anyone looking to spread misinformation could therefore buy a service that would allow them to push it more readily. While the new paid-for blue tick was bought by many people who had no intention of using it to spread false narratives, it was a boon to those who did.

Fact checking

When Mr Musk bought Twitter, the company was early in its work on what was then known as “Birdwatch”. Just as he renamed Twitter, Mr Musk changed that feature’s name to “Community Notes”, though it serves much the same purpose.

That purpose is to crowdsource fact checking on people’s posts. Users are able to report a given tweet – either for being misleading or outright false, or just lacking in context – and propose a note that should be attached to it to give further information.

Other users are then able to vote on those annotations, and the winners will be shown alongside tweets. Usually, they give references to authoritative sources that make clear that a post is false or needs further information.

Elon Musk has continued to promote this feature, even when it has occasionally been applied to his own tweets. But some users complain that it is slow – and that even when a note is added, it is much harder to see than the original post.

Safety staff

Another of Mr Musk’s early moves was to fire many of Twitter’s staff, including those who worked in its Trust and Safety team. Those staff worked in a variety of ways: engineers who found technical solutions to harassment, for instance, and moderators who checked users’ reports of posts that violated the site’s rules.

Those staff are no longer at X, and those initiatives are gone with them. That means users wait longer for reported posts to be checked, and that those posts are less likely to be taken down.

With that has come weaker enforcement of the rules. X’s terms do still rule out much of the behaviour that was previously banned on the site – including incitement to hatred against protected groups – but with fewer people to enforce them, such posts are more likely to stay up for longer.

Regulation

The European Union had been particularly critical of Elon Musk even before this week. It has pointed to a range of problems with the site – from the misleading blue ticks to a lack of fact checking – and suggested that it could punish further failures with fines.

Mr Musk has reacted angrily to censure from the EU. “The DSA is misinformation,” he posted on X.

This could spread more broadly through what has been called the “Brussels effect”, or the way that new EU rules tend to echo out of Europe and change the landscape in much of the rest of the world. The UK has already passed some of its own online regulations, for instance, and similar powers could come into force in other parts of the world.

That could either force X to take on some of the recommendations of experts about stemming the spread of misinformation – or it could lead to yet more fines being imposed on the company for failing to do so.
