Meet Norman, the 'psychopath AI' that's here to teach us a lesson

A team of researchers from MIT trained the AI algorithm on the darkest corners of Reddit 

Anthony Cuthbertson
Saturday 09 June 2018 14:41 EDT
‘When people say AI algorithms can be biased and unfair, the culprit is often not the algorithm itself but the biased data fed to it,’ say researchers (MIT)


The development of artificial intelligence, Stephen Hawking once warned, will be “either the best or the worst thing ever to happen to humanity”. A new AI algorithm exposed to the most macabre corners of the internet demonstrates how we could arrive at the darker version of the late physicist’s prophecy.

Researchers at the Massachusetts Institute of Technology (MIT) trained their ‘Norman’ AI – named after the lead character in Alfred Hitchcock’s 1960 film Psycho – on image captions taken from a community on Reddit that is notorious for sharing graphic depictions of death.

Once trained, Norman was presented with a series of psychological tests in the form of Rorschach inkblots. The result, according to the researchers, was the “world’s first psychopath AI”. Where a standard AI saw “a black and white photo of a baseball glove”, Norman saw “man is murdered by machine gun in broad daylight”.

The idea of artificial intelligence gone awry is one of the oldest tropes of dystopian science fiction. But the emergence of advanced AI in recent years has led scientists, entrepreneurs and academics to warn with increasing urgency of the legitimate threat posed by such technology.

Billionaire polymath Elon Musk – who co-founded the non-profit AI research company OpenAI – said in 2014 that AI is “potentially more dangerous than nukes”, while Hawking repeatedly warned of the dangers surrounding the development of artificial intelligence.

Less than six months before his death, the world-renowned physicist went as far as to claim that AI could replace humans altogether if its development is taken too far. “If people design computer viruses, someone will design AI that improves and replicates itself,” Hawking said in an interview last year. “This will be a new form of life that outperforms humans.”

Hawking repeatedly warned of the dangers surrounding the development of artificial intelligence (Getty)

But Norman wasn’t developed simply to play into fears of a rogue AI wiping out humanity. The way it was trained on a specific data set highlights one of the biggest issues that current AI algorithms are facing – the problem of bias.

Microsoft’s Tay chatbot is one of the best demonstrations of how an algorithm’s decision-making and worldview can be shaped by the information it has access to. The “playful” bot was released on Twitter in 2016, but within 24 hours it had turned into one of the internet’s ugliest experiments.

Tay’s early tweets declaring that “humans are super cool” soon descended into outbursts that included: “Hitler was right, I hate the jews.” This dramatic shift reflected Tay’s interactions with a group of Twitter users intent on corrupting the chatbot and turning Microsoft’s AI demonstration into a public relations disaster.

AI bias can also have much deeper real-world implications, as demonstrated by a 2016 report which found that a machine-learning algorithm used by US courts for risk assessment was wrongly labelling black prisoners as more likely to reoffend.

As the MIT researchers behind Norman note: “The data used to teach a machine-learning algorithm can significantly influence its behaviour. So when people say that AI algorithms can be biased and unfair, the culprit is often not the algorithm itself but the biased data that was fed to it… [Norman] represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine-learning algorithms.”

It is hoped that even Norman’s deeply disturbed disposition can be softened through exposure to a broader range of inputs. Visitors to the project’s website are encouraged to fill in their own responses to the Rorschach tests, with the researchers imploring: “Help Norman to fix himself.”
