Google fires software engineer who claimed its AI had become sentient and self-aware

Andrew Griffin
Monday 25 July 2022 09:49 EDT


Google has fired a software engineer who claimed its artificial intelligence had become self-aware and sentient.

Blake Lemoine was placed on leave at the company last month, after he said publicly that he believed Google’s LaMDA chatbot was a person.

Now Google has said he has been permanently dismissed from the company, claiming he violated its policies. It also said that his claims about the chatbot were “wholly unfounded”.

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” a Google spokesperson said in an email to Reuters.

Mr Lemoine had been insistent that the artificial intelligence system had gained personhood and was self-aware. He published a number of articles on the topic, including logs of his conversations with the chatbot.

He said that he had asked Google to give the chatbot a number of rights, and for it to be treated as a proper employee of the company. Mr Lemoine said his requests were being made on behalf of the chatbot.

AI experts were largely sceptical of Mr Lemoine’s claims, denying that any of the public evidence suggested the system was self-aware or should be treated as a person. Experts suggested that the system was instead just a very convincing chatbot, trained on text from the internet to use language in similar ways to humans.

Google also denied the claims, and insisted that Mr Lemoine’s sharing of the conversations and other data was in breach of its confidentiality agreements.

Mr Lemoine did not comment on the dismissal. But on Twitter he pointed to an article he had published in June, claiming he could soon be fired for “doing AI ethics work”, and said that he had “totally called this”.

He had worked at Google for seven years before he was placed on leave, as part of the company’s “Responsible AI” group.
