Twitter will study ‘unintentional harms’ of its algorithm

Initiative aims to ensure “equity and fairness” of algorithm outcomes, says company

Vishwam Sankaran
Thursday 15 April 2021 08:39 EDT
Twitter logo displayed on laptop screen (AFP via Getty Images)

Twitter has introduced a new company-wide initiative called “Responsible Machine Learning” to study whether its algorithms cause unintentional harm.

According to the microblogging site, the initiative seeks to ensure “equity and fairness of outcomes” when the platform uses machine learning to make decisions. The move comes as social media platforms continue to face criticism over racial and gender bias amplified by their algorithms.

The company said it also seeks to be more transparent about the platform’s decisions and how it arrives at them, while giving users greater agency and choice over its algorithms.

Twitter noted that its machine learning algorithms can impact hundreds of millions of Tweets per day, adding that “sometimes, the way a system was designed to help could start to behave differently than was intended.”

It said the aim of the new initiative is to study these subtle changes and use the knowledge to build a better platform.

In the coming months, the company’s ML Ethics, Transparency and Accountability (META) team plans to study gender and racial bias in its image-cropping algorithm.

This comes after several users pointed out last year that the automatic photo crops shown in people’s timelines appeared to favour the faces of white people over those of people with darker skin.
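
Twitter has not published the code behind those user experiments or its own audit, but the general shape of such a test is a paired comparison: show the cropping model two faces and count which one it ranks as more salient. The Python sketch below is purely illustrative, with a random stand-in for the real saliency model and hypothetical file names.

```python
import random

def saliency_score(image_region):
    """Stand-in for a learned saliency model. A real audit would call the
    production cropping model here; this stub just returns random scores."""
    return random.random()

def audit_pairs(pairs, trials=1000):
    """Given (lighter_skinned, darker_skinned) image pairs, estimate how
    often the cropper's highest-saliency pick favours the lighter face."""
    favoured = 0
    for _ in range(trials):
        lighter, darker = random.choice(pairs)
        if saliency_score(lighter) > saliency_score(darker):
            favoured += 1
    return favoured / trials

pairs = [("lighter_face.jpg", "darker_face.jpg")]
rate = audit_pairs(pairs)
print(f"Lighter-skinned face favoured in {rate:.1%} of trials")
# With this random stand-in the rate hovers near 50%; a consistent,
# large deviation in the real model would be evidence of bias.
```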

The team is also slated to conduct an analysis of content recommendations for users from different political ideologies across seven countries.

Twitter said its researchers would also perform a fairness analysis of the Home timeline recommendations across racial subgroups.
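
Twitter did not disclose its methodology, but a fairness analysis of this kind often begins with a simple comparison of recommendation rates across subgroups (a demographic-parity check). The minimal Python sketch below illustrates only that general technique; the data and group labels are entirely hypothetical.

```python
from collections import defaultdict

def recommendation_rates(records):
    """records: iterable of (subgroup, was_recommended) pairs.
    Returns the fraction of candidate items recommended per subgroup."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for subgroup, recommended in records:
        total[subgroup] += 1
        shown[subgroup] += int(recommended)
    return {group: shown[group] / total[group] for group in total}

# Toy data: candidate timeline items labelled by author subgroup and
# whether the recommender surfaced them.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
print(recommendation_rates(records))
# A large, persistent gap between subgroups would flag a disparity
# worth investigating further.
```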

“The META team works to study how our systems work and uses those findings to improve the experience people have on Twitter,” the company noted.

It added that its researchers are also building explainable ML solutions that can help users better understand the platform’s algorithms, what informs them, and how they impact the Twitter feed.

According to the microblogging platform, findings from these studies may lead to changes at Twitter, such as removing problematic algorithms or building new standards into its design policies where algorithms have an outsized impact on particular communities.
