Twitter rolls out pre-tweet warning about entering ‘intense’ conversations
Heads Up feature aims to assess the ‘vibe’ of interactions
Twitter will begin warning users about entering into “heated or intense” interactions on the platform.
Pre-tweet alerts will offer a “heads up” if a conversation contains sensitive or controversial subjects, while a pop-up will warn users to not break Twitter’s rules.
The pop-up also encourages people to communicate with respect, check the facts, and be open to diverse perspectives.
The new system is designed to better support healthy conversation, according to Twitter.
“Ever want to know the vibe of a conversation before you join in?” read a post from Twitter Support’s official account.
“We’re testing prompts on Android and iOS that give you a heads up if the convo you’re about to enter could get heated or intense.”
Twitter has faced criticism for providing a platform for harassment and abuse, prompting a number of measures aimed at reducing the toxicity of interactions.
A new “Safety Mode” was tested earlier this year, which automatically blocks accounts suspected of trolling for a period of seven days.
“When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier,” Twitter said.
Last year, anyone attempting to reply to a tweet with “harmful” language received a pop-up urging them to reconsider their choice of words.
A similar system was also launched by Facebook, with users receiving a “nudge” if they were about to post a comment containing offensive language.
Both social media firms rely on a combination of human and artificial intelligence algorithms to moderate the vast amounts of content posted online through their apps.