AI experts warn against crime prediction algorithms, saying there are no 'physical features to criminality'

The experts say that such research could encourage a feedback loop in which racially biased data produces racially biased algorithms

Adam Smith
Monday 29 June 2020 10:48 EDT
The Metropolitan Police commissioner said facial recognition was being used in a ‘proportionate, limited way that stores no biometric data’ (PA)


A number of AI researchers, data scientists, sociologists, and historians have written an open letter calling for an end to the publication of research claiming that artificial intelligence or facial recognition can predict whether a person is likely to be a criminal.

The letter, signed by more than 1,000 experts, argues that data generated by the criminal justice system cannot be used to “identify criminals” or predict behaviour.

Historical court and arrest data reflect the policies and practices of the criminal justice system and are therefore biased, the experts say.

“These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences,” the letter reads.

Moreover, because “‘criminality’ operates as a proxy for race due to racially discriminatory practices in law enforcement and criminal justice, research of this nature creates dangerous feedback loops”, the letter says.

If the justice system is biased against black people then those biases will be present in the algorithms when they are trained, the experts argue.

Those algorithms will then feed back into the justice system, perpetuating a loop in which the assumptions underlying the algorithms are never questioned.
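
The mechanics of that loop can be sketched with a small, hypothetical simulation. It is not drawn from the letter itself, and all group names and numbers are invented for illustration: two groups offend at identical rates, but one is policed more heavily, so a model trained on the resulting arrest records scores that group as higher risk and, once its scores are used to direct patrols, regenerates the same skewed data in every subsequent round.

```python
# A minimal, hypothetical sketch (not taken from the letter) of the feedback
# loop the researchers describe: a "risk model" trained on arrest records
# learns the policing disparity baked into those records, and deploying its
# scores to direct patrols reproduces that disparity indefinitely.
import random

random.seed(0)

TRUE_OFFENDING_RATE = 0.05                 # identical for both groups by construction
patrol_share = {"A": 1 / 3, "B": 2 / 3}    # group B is historically over-policed

def record_arrests(shares, n_per_group=20_000):
    """Arrests reflect both offending and how heavily a group is patrolled."""
    arrests = {}
    for group, share in shares.items():
        arrests[group] = sum(
            1
            for _ in range(n_per_group)
            if random.random() < TRUE_OFFENDING_RATE   # person offends
            and random.random() < share                # and happens to be policed
        )
    return arrests

for generation in range(4):
    arrests = record_arrests(patrol_share)
    total = sum(arrests.values())
    # Training on the arrest data: the model's "risk score" for each group is
    # simply that group's share of recorded arrests -- it has learned the
    # patrol pattern, not any difference in behaviour.
    risk_score = {g: count / total for g, count in arrests.items()}
    print(f"generation {generation}: arrests={arrests}  risk scores={risk_score}")
    # Deployment closes the loop: the next round of patrols follows the risk
    # scores, so the original disparity is regenerated in the next data set.
    patrol_share = risk_score
```

In this toy setting the model’s risk scores never reflect any difference in behaviour; they only echo the historical patrol pattern under which the arrest data was generated.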

“Having a face that looks a certain way does not cause an individual to commit a crime — there simply is no 'physical features to criminality' function in nature,” the letter continues.

It also points out that this data can never be “fair”. The incentives driving machine learning research and development, it argues, are greater than those to “interrogate the cultural logics and implicit assumptions underlying their models”.

“At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature, research which erases historical violence and manufactures fear through the so-called prediction of criminality," the researchers say.

Publishing such work, the researchers say, “would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world.”

The letter specifically references a publication called “A Deep Neural Network Model to Predict Criminality Using Image Processing.” Researchers claimed in a press release that the system was “capable of predicting whether someone is likely going to be a criminal ... with 80 percent accuracy and no racial bias.”

The release has now been deleted.

Springer, which would have published the paper, said that it was “submitted to a forthcoming conference for which Springer had planned to publish the proceedings. After a thorough peer review process the paper was rejected”.

Facial recognition software has often been criticised for its inaccuracy and biases. Recently, Amazon's facial recognition technology incorrectly matched more than 100 photos of politicians in the UK and US to police mugshots.

The company has since placed a moratorium on police use of its Rekognition software, while protests continue as part of the Black Lives Matter movement.

IBM has also ended research into general facial recognition products.

It is not only facial recognition technology that has come under fire. Workers at both Microsoft and Google have asked their respective companies to end contracts with the police as part of a greater movement examining the relationship between large technology companies and law enforcement.

"We’re disappointed to know that Google is still selling to police forces, and advertises its connection with police forces as somehow progressive, and seeks more expansive sales rather than severing ties with police and joining the millions who want to defang and defund these institutions,” over 1600 Google employees wrote.
