Instagram to warn users over ‘bullying’ language in captions
‘We should all consider the impact of our words,’ says anti-cyberbullying charity
Instagram is to warn its users when they are using language in their captions that may be perceived as offensive or bullying.
The social media company said it will use artificial intelligence to spot language in captions that could be deemed potentially harmful.
A similar feature, which alerts users when the comments they’re leaving on other people’s posts contain possibly harmful language, was launched earlier this year.
When an Instagram user posts a caption that could be seen as bullying, a message will appear on their screen informing them that their caption looks similar to others that have previously been reported on the platform.
They are then given the option to edit the caption, learn more about why it has been flagged by the feature or to post it as it is.
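The flow described above — flag a draft caption that resembles previously reported ones, then let the user edit, learn more, or post anyway — can be sketched roughly as follows. This is an illustrative assumption only: the function names, the example reported captions, the similarity measure (Python's standard-library `difflib`) and the threshold are all hypothetical, not Instagram's actual system, which uses machine-learning models rather than simple string matching.

```python
# Hypothetical sketch of a caption warning flow. Not Instagram's real
# implementation; reported captions, threshold and names are invented.
from difflib import SequenceMatcher

# Stand-in for a corpus of captions previously reported on the platform.
REPORTED_CAPTIONS = [
    "you are such a loser",
    "nobody likes you, just quit",
]

def looks_like_bullying(caption: str, threshold: float = 0.7) -> bool:
    """Return True if the caption closely resembles a reported caption."""
    caption = caption.lower()
    return any(
        SequenceMatcher(None, caption, reported).ratio() >= threshold
        for reported in REPORTED_CAPTIONS
    )

def submit_caption(caption: str, action: str = "post") -> str:
    """Mimic the choice offered to the user when a caption is flagged."""
    if looks_like_bullying(caption):
        if action == "edit":
            return "caption returned for editing"
        if action == "learn_more":
            return "showing why this caption was flagged"
        return "posted with warning acknowledged"  # post it as it is
    return "posted"
```

A caption like "You are such a loser!!" would trip the warning and hand control back to the user, while an innocuous caption posts straight through.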
Earlier this year, the head of Instagram, Adam Mosseri, published a statement outlining the Facebook-owned firm’s commitment to combating cyberbullying.
In the statement, Mosseri said the social media platform is “rethinking the whole experience of Instagram” in order to address the issue.
“We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves,” he said.
“It’s our responsibility to create a safe environment on Instagram. This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem.”
Instagram has been criticised in the past for failing to take adequate measures to protect its users from online abuse.
In February, the social media company stated it was committed to removing all images related to self-harm on the platform.
Eight months later, Instagram announced plans to extend its ban on self-harm- and suicide-related images to drawings, cartoons and memes.
Dan Raisbeck, co-founder of anti-cyberbullying charity Cybersmile, said the firm’s latest feature is a good example of taking a proactive approach to preventing cyberbullying.
“We should all consider the impact of our words, especially online where comments can be easily misinterpreted,” he said.
“Tools like Instagram’s Comment and Caption Warning are a useful way to encourage that behaviour before something is posted, rather than relying on reactive action to remove a hurtful comment after it’s been seen by others.”
You can contact the National Bullying Helpline on 0845 22 55 787. The helpline is open from 9am to 5pm, Monday to Friday.