Twitter drops its image-cropping algorithm after finding it excludes Black people and women
Algorithm was developed to enable users to crop images and improve consistency in the size of photos in their timelines
Twitter is scrapping its image-cropping algorithm after confirming feedback raised by several users on the platform that the tool is biased against Black people and women.
The “saliency algorithm,” introduced by Twitter in 2018, was developed to enable users to crop images and improve consistency in the size of photos in their timelines, so that they can see more Tweets at a glance, it noted in a blog post.
Trained on human eye-tracking data, the machine learning (ML) algorithm works, the company said, by assigning “saliency scores” to different points in an image, predicting where a person is likely to look first, and then cropping the image to an easily viewable size around that point.
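The mechanics described above can be sketched roughly as follows. This is a minimal illustration of saliency-based cropping, not Twitter’s actual implementation: it assumes a precomputed saliency map and simply centres a fixed-size crop window on the highest-scoring point, clamped to the image bounds.

```python
import numpy as np

def saliency_crop(image, saliency_map, crop_h, crop_w):
    """Crop a (crop_h, crop_w) window centred on the most salient
    point, clamped so the window stays inside the image."""
    h, w = saliency_map.shape
    # The point with the highest saliency score -- where a viewer
    # is assumed to look first.
    y, x = np.unravel_index(np.argmax(saliency_map), (h, w))
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 6x8 "image" whose saliency map peaks near one corner,
# so the crop is pulled toward that region.
img = np.arange(48).reshape(6, 8)
sal = np.zeros((6, 8))
sal[1, 6] = 1.0
crop = saliency_crop(img, sal, 4, 4)
```

Because the crop follows the single highest-scoring point, any systematic skew in which faces or bodies the model scores as salient translates directly into who gets kept in the frame.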
However, a recent analysis, conducted by researchers including Twitter’s ML Ethics, Transparency, and Accountability (META) team, found that when the algorithm crops images it shows an 8 per cent bias in favour of women over men, and a 4 per cent bias in favour of white people.
The results of the analysis also revealed a 7 per cent difference from demographic parity in favour of white women compared with Black women, and, among men, a 2 per cent bias in favour of white individuals.
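The “difference from demographic parity” figures above can be made concrete with a small sketch. The counts below are hypothetical, not Twitter’s data: the idea is simply that if the crop keeps each group at the same rate, the gap is zero, and any non-zero gap is the kind of percentage difference the researchers reported.

```python
def parity_gap(crop_counts, group_sizes, group_a, group_b):
    """Difference between the rates at which two groups are kept in
    the crop. Under demographic parity this gap would be zero."""
    rate_a = crop_counts[group_a] / group_sizes[group_a]
    rate_b = crop_counts[group_b] / group_sizes[group_b]
    return rate_a - rate_b

# Hypothetical counts for illustration: out of 100 paired images per
# group, the crop favoured one group 52 times and the other 48 times,
# giving a 4 per cent difference from demographic parity.
gap = parity_gap({"white": 52, "Black": 48},
                 {"white": 100, "Black": 100},
                 "white", "Black")
```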
The researchers also tested the algorithm for objectification bias, sometimes called the “male gaze”, by analysing how the ML tool cropped 100 random images per group, each showing a man or a woman and containing more than one spot identified as salient.
It found that for every 100 images per group, about three were “cropped at a location other than the head.”
Some of these locations, the company said, were non-physical aspects of the image, such as a “number on a sports jersey.”
According to Twitter, the algorithmic bias may have been due to several factors such as issues with image backgrounds and eye colour, but it added that none of these were an excuse.
Rumman Chowdhury, Twitter’s director of software engineering and a specialist in ethics and artificial intelligence, who wrote the blog post, said the company concluded after its review that cropping decisions should be left to users.
“One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people,” Chowdhury wrote in the post.
Twitter noted that since March it has been testing a new way of displaying standard-aspect-ratio photos in full, giving people more control over how their images appear – without the saliency algorithm’s crop.
“We’re working on further improvements to media on Twitter that build on this initial effort, and we hope to roll it out to everyone soon,” the microblogging platform added.