Taking away online anonymity won’t solve the problem of abuse

People have a right not to be abused online but we also have the right to remain anonymous. We shouldn’t need to give away one right to try and protect the other, write Ellen Judson and Joe Mulhall

Saturday 17 July 2021 09:45 EDT
Bukayo Saka (above), Jadon Sancho and Marcus Rashford received racist abuse online after the Euro 2020 final (PA Wire)

The dark side of social media is never far from the headlines. Moments after his penalty was saved, 19-year-old Bukayo Saka knew “instantly the kind of hate that I was about to receive”. One of England’s standout players in the tournament was buried in a wave of racist abuse, some of which appears to have come from England fans.

Saka was clear on where responsibility lay: “powerful platforms are not doing enough to stop these messages”.

When individuals face abuse online, the go-to solution is to unmask the abusers. It sounds simple: you wouldn’t say that to my face. A petition to make ID verification a requirement for social media accounts has rocketed to nearly 700,000 signatures, while a YouGov poll found that 78 per cent of people believe everybody should have to disclose their real identity online.

It’s encouraging to see people from across the political spectrum, the government, celebrities, the media and the public demanding change and refusing to accept that abuse is “just the cost” of being active online. But anonymity is a red herring.

It’s likely that restricting anonymity would not significantly increase our ability to tackle online abuse. While anonymity no doubt emboldens some people to behave worse, a huge amount of the hate and abuse online comes from named accounts, which suggests this is an issue of ideology and behaviour, not just accountability.

Nor is it a choice between handing Facebook your passport details and being completely untraceable online. Law enforcement and NGOs already can, and do, identify who is behind “anonymous” accounts that have broken the law.

For many people, anonymity online is not antithetical to safety – it is safety. Being able to find information and support, develop opinions and try out different identities without having to declare or prove your identity can be a crucial lifeline.

Anonymity allows people who are at risk – because of their sexuality, gender, health or immigration status, or because they are experiencing abuse – to access information, help and support while avoiding questions, persecution or violence. It is also an essential tool for civil society – including Hope Not Hate, when safely researching extremism – and for investigative journalists and whistleblowers speaking truth to power.

Anonymity is not just important for people who are particularly vulnerable. It is a right we all have and should protect – particularly as data collected and shared about us online grows exponentially, and people face threats to privacy from governments, corporations, malicious individuals and criminals. As public discourse plays out online, we should be free to engage without having to hand our identification papers over to a major tech company in Silicon Valley or Beijing.

The problem with anonymous abuse is not that it’s anonymous – it’s that it’s abuse.

We can’t look to tech solutions to fix the problem of racism in society, but we should demand tech solutions to problems that tech exacerbates, such as the instantaneous, large-scale abuse it facilitates. The news that the government will designate racist abuse a priority harm under the Online Safety Bill is welcome. But the focus is on post hoc takedown and user identification, rather than on reducing the risk of people experiencing abuse in the first place.

Giving users powers to control who can interact with them online – such as letting people turn off comments from anonymous accounts – is a first step. But this is only a band-aid: it puts the onus on individuals to manage the problem, it doesn’t tackle non-anonymous abuse, and it risks dividing online spaces into separate bubbles of “anonymous” and “verified” users, further marginalising people who already face barriers to engaging online.

And it won’t go far enough while algorithms promote divisive content and out-group animosity, rewarding hostility with virality, and while throwaway accounts can be created, deleted and recreated with no curation or oversight. The way we interact online is shaped by the design choices of the platforms that facilitate it – and those choices can be changed.

Algorithmic curation systems can be tested and updated to promote pro-social content. Greater friction can be introduced when posting – for example, restricting how brand-new accounts can interact with public figures. Stable online identities can be encouraged by requiring users, anonymous or verified alike, to use an account over time, build a network and a reputation, engage with others and follow the rules in order to earn more freedoms on the platform.
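To make the point concrete, here is a minimal illustrative sketch of how such earned-freedom friction could work. It is not any platform’s actual system: the thresholds, the field names and the can_reply_to_public_figure function are all hypothetical, chosen only to show the principle that an account’s standing can come from its age, network and conduct rather than from whether its owner has disclosed their identity.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    created_at: datetime          # when the account was registered
    followers: int                # rough proxy for an established network
    recent_rule_violations: int   # strikes in a recent window, e.g. 90 days
    verified: bool                # ID-verified or not; deliberately not decisive

def can_reply_to_public_figure(account: Account,
                               min_age_days: int = 14,
                               min_followers: int = 10,
                               max_violations: int = 0) -> bool:
    # Hypothetical friction rule: an account earns the right to reply to
    # public figures by existing for a while, building a small network and
    # keeping a clean recent record, regardless of whether it is anonymous.
    age_days = (datetime.now(timezone.utc) - account.created_at).days
    return (age_days >= min_age_days
            and account.followers >= min_followers
            and account.recent_rule_violations <= max_violations)

# A day-old throwaway account is throttled; an established anonymous one is not.
throwaway = Account(datetime.now(timezone.utc), followers=0,
                    recent_rule_violations=0, verified=False)
established = Account(datetime(2019, 1, 1, tzinfo=timezone.utc), followers=250,
                      recent_rule_violations=0, verified=False)
print(can_reply_to_public_figure(throwaway))    # False
print(can_reply_to_public_figure(established))  # True

The design choice being illustrated is that verification plays no part in the decision: the lever is earned trust, not identity disclosure.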

We need to see more proactive and creative actions that don’t just rely on a mass collection of personal data. People have a right not to be abused online but we also have the right to remain anonymous. We shouldn’t give away one right to try and protect the other: we should find solutions to protect both.

Ellen Judson is a senior researcher at the Centre for the Analysis of Social Media (CASM) at the cross-party think tank Demos, and Joe Mulhall is head of research at Hope Not Hate
