
Amazon bans police use of facial recognition technology over racial bias fears

Company hopes one-year moratorium gives Congress time to put 'stronger regulations' in place

Kate Ng
Thursday 11 June 2020 04:29 EDT
Amazon announced on 10 June that it will bar police from using its facial recognition technology, Amazon Rekognition, for one year, in the wake of ongoing protests against police abuse following the death of George Floyd in Minneapolis (EPA)


Amazon has banned US law enforcement from using its facial recognition software for a year following concerns raised by civil rights advocates about potential racial bias.

The technology giant announced the one-year moratorium on police use of its Rekognition software in a blog post, saying it hopes the period will give US lawmakers “enough time to implement appropriate rules”.

“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” said the company.

Organisations such as Thorn, the International Center for Missing and Exploited Children and Marinus Analytics will still be allowed to use the software to identify and rescue victims of human trafficking and missing children.

Another tech giant, IBM, also said this week it would no longer offer its facial recognition software for “mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values”.

In a letter to Congress, IBM chief executive Arvind Krishna wrote: “Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”

Studies have suggested that facial recognition algorithms are far less accurate at identifying black and Asian faces than white faces.

Last year, the US National Institute of Standards and Technology (NIST) tested 189 algorithms from 99 developers and found that black and Asian faces were between 10 and 100 times more likely to be falsely identified by the algorithms than white faces.

Black women were even more likely to be misidentified when algorithms were required to match a particular photo to another image of the same face in a database.
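
To make that statistic concrete, here is a minimal illustrative sketch in Python of how a per-group false match rate might be computed. This is a hypothetical example, not NIST's actual methodology, and the function name, groups and data below are invented for illustration: a false match occurs when the system declares a match between photos of two different people, and a large ratio between groups' rates would indicate the kind of disparity NIST reported.

from collections import defaultdict

def false_match_rate_by_group(trials):
    # trials: iterable of (group, same_person, predicted_match) tuples.
    # A false match is a predicted match on an "impostor" pair, i.e. a
    # comparison between photos of two genuinely different people.
    impostor = defaultdict(int)       # impostor comparisons seen per group
    false_matches = defaultdict(int)  # impostor pairs wrongly accepted per group
    for group, same_person, predicted_match in trials:
        if not same_person:
            impostor[group] += 1
            if predicted_match:
                false_matches[group] += 1
    # False match rate = wrongly accepted impostor pairs / all impostor pairs
    return {g: false_matches[g] / impostor[g] for g in impostor}

# Hypothetical results: group B's impostor pairs are accepted far more often.
trials = [
    ("A", False, True), ("A", False, False), ("A", False, False), ("A", False, False),
    ("B", False, True), ("B", False, True), ("B", False, True), ("B", False, False),
]
print(false_match_rate_by_group(trials))  # {'A': 0.25, 'B': 0.75} -> a 3x disparity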
