Twitter experiments with asking users to ‘revise reply’ if they use bad language
App has often been criticised for not taking a stronger approach against sexist and racist users
Twitter is testing a new feature to limit "harmful" language on the platform, asking users to reword replies that contain it before they are published.
In a tweet, the site said: "When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful."
Twitter gave no indication of what language it considers ‘harmful’, so users are left to speculate over what words will or will not be acceptable – whether that’s simply foul language such as swearing or hateful speech such as sexist or racial slurs.
A Reuters report suggests that the system will flag language similar to that found in posts other users have reported.
Twitter has a hate speech policy which reprimands users who “promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”
However, the company has repeatedly been criticised for not taking enough action to protect its users, having been described as a “toxic place”, especially for women and people of colour.
For the moment, Twitter’s “experiment” will only happen on the iOS version of its app. It is unclear how many users Twitter is testing the functionality on, or whether we can expect to see this change expanded to all 330 million of the social media site’s monthly active users. Twitter declined to comment.
“We're trying to encourage people to rethink their behaviour and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” Sunita Saligram, Twitter's global head of site policy for trust and safety, told Reuters.
Users could, in theory, probe the feature to work out which words trigger it, giving malicious users an insight into what language Twitter deems offensive. However, Saligram said the rollout was aimed at people who occasionally break the rules, not repeat offenders.
The test reportedly started on Tuesday and will run globally for “a few weeks”, though it will only apply to tweets written in English.
This is not the only change that Twitter has been testing recently. The company has demonstrated a new way of showing quote-tweets under likes and retweets, as well as a new way to read threads on iOS and in its web app.