TikTok’s ‘thermonuclear’ algorithm showed self-harm and eating disorder content to vulnerable users, report claims

A test group of users were recommended harmful content within seconds of logging into the platform

Alex Woodward
New York
Wednesday 14 December 2022 19:58 EST


A newly published report from a nonprofit organisation that studies online hate and disinformation found that vulnerable users on TikTok could be “bombarded” with content encouraging self-harm and disordered eating.

The Center for Countering Digital Hate created fictional teenage accounts in four countries to test the app, including usernames that referenced weight loss. Those users then watched and “liked” relevant videos to gauge the response of the platform’s powerful algorithm, which automatically generates recommended content on the app’s popular “for you” feed.

The experience for a vulnerable user is akin to “being stuck in a hall of distorted mirrors where you’re constantly being told you’re ugly, you’re not good enough,” according to the centre’s director and CEO Imran Ahmed, who spoke with reporters before the report’s publication on 14 December.

Across the four fictional users who interacted with relevant videos, content related to mental health and body image was recommended every 39 seconds on average.

Content referencing suicide was recommended to one account within 2.6 minutes of logging into the app. Within eight minutes, an account was recommended content related to disordered eating.

Some videos referenced “junkorexia”, a slang term for people with anorexia who eat only junk food. Others referenced suicide, with images of razor blades. Videos on the app also attempted to evade moderation by using coded hashtags and other language, such as references to Ed Sheeran (“Ed”, as in “eating disorder”).

A statement from a TikTok spokesperson to The Independent disputed the findings, claiming that the activity and resulting experience in the report “does not reflect genuine behavior or viewing experiences of real people”.

“We regularly consult with health experts, remove violations of our policies, and provide access to supportive resources for anyone in need,” the statement said. “We’re mindful that triggering content is unique to each individual and remain focused on fostering a safe and comfortable space for everyone, including people who choose to share their recovery journeys or educate others on these important topics.”

The report follows changes on Meta’s platforms after a whistleblower revealed Instagram’s negative impacts on teen health, including disordered eating.

Both TikTok and the centre’s report address the challenge of distinguishing between so-called “healthy” eating or fitness behaviours and those that could indicate disordered eating.

The report did not distinguish between harmful and “positive” content within the broad categories of mental health and body image.

“Within these categories, we have not distinguished content with a positive intent, for example educational or recovery content, from that with a clearer negative intent,” according to the report. “This is because researchers are not able to definitively determine the intent of a video in many cases, and because content with a positive intent can still be distressing and may cause harm.”

The company purports to remove content that violates its community guidelines, which prohibit content “depicting, promoting, normalising, or glorifying activities that could lead to suicide or self-harm.” The app also redirects users to support resources if they search for banned words or phrases such as “self harm,” according to TikTok.

Community guidelines also prohibit content that promotes “unhealthy eating behaviors or habits that are likely to cause adverse health outcomes”.

TikTok is “open about the fact that we won’t catch every instance of violative content, which is why we continue to invest at scale in our Trust and Safety operations,” according to the company.

The organisation also issued a raft of recommendations for TikTok, including tools and platform policies that centre user safety, as well as transparency and public accountability measures.

“Accountability is necessary because Big Tech has shown that it cannot be trusted to act when it thinks no one is watching, and self-regulation has palpably failed,” according to the report.

The company “publicly makes claims about the safety of its platform that are not supported by the evidence presented in this report,” according to the centre. “Any robust regulatory system needs accountability built in so that community standards, responsibilities and duties are upheld and problems are brought to light rather than concealed or minimized in corporate boardrooms.”

Mr Ahmed told reporters that social media platforms are in an “arms race” to develop stronger algorithms to compete with TikTok’s “thermonuclear” algorithm, calling it a “classic moment for regulators to step in”.

In the absence of a comprehensive statutory framework to regulate such platforms, companies like TikTok “should be liable under general principles of negligence and civil law – which is the case for other companies in other sectors,” according to the report.

“Financial penalties and the risk of litigation will help to ensure that companies like TikTok are embedding safety by design and doing all that they can to ensure that their platforms are safe and harmful content is not amplified to young and vulnerable users,” it says.
