TikTok’s algorithm misreads creator’s pro-Black profile as a threat

The error occurred on TikTok’s beta Creator Marketplace, which links content makers with sponsors

Adam Smith
Friday 09 July 2021 12:03 EDT


TikTok blocked users of its Creator Marketplace from using the word “black” and phrases like “Black Lives Matter” in their bios, as the algorithm flagged them as “inappropriate content”.

Creator Ziggi Tyler discovered the issue while attempting to update his bio; the words “Black,” “Black Lives Matter,” “Black people,” “Black success,” “Pro-Black,” and “I am a Black man” were not accepted. “Pro-white” and “supporting white supremacy,” however, were accepted by TikTok’s algorithms without issue.

TikTok’s Creator Marketplace is currently in invite-only beta testing, but aims to connect creators with brands for sponsorship deals.

TikTok said that the app mistakenly flagged the phrases because its hate speech detector paired the word “black” with the word “audience” – which contains the string “die”.

“Our TikTok Creator Marketplace protections, which flag phrases typically associated with hate speech, were erroneously set to flag phrases without respect to word order,” a TikTok spokesperson said in a statement.
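The statement suggests a filter that matches banned word pairs anywhere in the text, ignoring word order and word boundaries. A minimal sketch of how such a filter could misfire is below; the function name and the banned pair are hypothetical, as TikTok has not published its implementation:

```python
# Hypothetical order-insensitive keyword filter, illustrating the failure
# mode TikTok described. Not TikTok's actual code.

BANNED_PAIRS = [("black", "die")]  # illustrative pair only

def is_flagged(bio: str) -> bool:
    text = bio.lower()
    # Plain substring matching, with no word boundaries and no ordering:
    # "audience" contains "die", so "Black audience" trips the filter.
    return any(a in text and b in text for a, b in BANNED_PAIRS)

print(is_flagged("Black audience"))  # True – a false positive
print(is_flagged("supporting white supremacy"))  # False – missed entirely
```

Matching substrings without word boundaries or ordering is what produces both failure modes at once: innocuous phrases are flagged while genuinely hateful ones pass.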

“We recognize and apologize for how frustrating this was to experience, and our team has fixed this significant error. To be clear, Black Lives Matter does not violate our policies and currently has over 27B views on our platform.”

The issue is the latest in a series of examples of automated systems working against minorities. Instagram’s CEO Adam Mosseri said in June 2020 that the company needed to better support the Black community and was looking into how its “policies, tools, and processes impact black people”, including its own algorithmic bias.

Algorithmic censorship also saw posts from Palestinians about violence in Gaza taken down on Facebook, Instagram, and Twitter, and led to criticism over the black-box nature of these systems.

Outside of social media, other algorithms, including facial recognition algorithms, routinely fail to properly identify the faces of people of colour – who are already targeted disproportionately by police. In February 2019, Nijeer Parks spent 10 days in jail and paid $5,000 (£3,627) to defend himself after being misidentified by facial recognition software and subsequently arrested by police.

“Regardless of what the algorithm is and how it picked up, somebody had to program that algorithm,” Tyler told Recode. “And if [the problem] is the algorithm, and the marketplace has been available since [2020], why wasn’t this a conversation you had with your team, knowing there have been racial controversies?”
