Google reveals new ‘robot constitution’ to try and stop robots from accidentally killing humans

Without restrictions, robots could misunderstand humans’ intentions and accidentally harm them

Andrew Griffin
Friday 05 January 2024 12:26 EST


Google has written a “robot constitution” as one of a number of ways it is trying to limit the harm caused by robots.

One day, the company hopes that its DeepMind robotics division will be able to create a personal helper robot that can respond to requests. It could be asked to tidy the house or cook a nice meal, for instance.

But such a seemingly simple request could be beyond the understanding of robots. What's more, acting on it might be dangerous: a robot might not know, for instance, that it should not tidy the house so forcefully that its owner gets hurt.

The company has now revealed a set of new advances that it hopes will make it easier to develop robots that can help with such tasks without causing any harm. The systems are intended to "help robots make decisions faster, and better understand and navigate their environments", it said, and to do so safely.

The breakthroughs include a new system called AutoRT, which uses artificial intelligence to understand the aims of humans. It does so using large models, including a large language model (LLM) of the kind that powers ChatGPT.

It works by taking data from cameras on the robot and feeding it into a visual language model, or VLM, which can understand the environment and the objects within it and describe them in words. That description is then passed to the LLM, which generates a list of tasks that might be possible with those objects and decides which of them should be done.
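Google has not published AutoRT's code, but the pipeline it describes might look roughly like the following sketch, in which the function names and canned outputs are hypothetical stand-ins for the company's proprietary VLM and LLM models:

```python
# A minimal sketch of the AutoRT-style pipeline described above.
# The function names (describe_scene, propose_tasks, select_task) are
# hypothetical stand-ins, not Google's actual API, and the canned
# strings stand in for real model outputs.

def describe_scene(camera_image):
    """VLM step: turn raw camera pixels into a text description."""
    # A real visual language model would generate this description.
    return "a kitchen counter with a sponge, a cup and a bag of chips"

def propose_tasks(scene_description):
    """LLM step: generate candidate tasks for the described scene."""
    # A real LLM would generate these from the description.
    return [
        "wipe the counter with the sponge",
        "pick up the cup",
        "hand the bag of chips to the person",
    ]

def select_task(candidate_tasks):
    """LLM step: pick one task to execute (before safety filtering)."""
    return candidate_tasks[0]

camera_image = None  # placeholder for a real camera frame
scene = describe_scene(camera_image)
task = select_task(propose_tasks(scene))
print(f"Scene: {scene}\nChosen task: {task}")
```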

But Google also noted that before those robots could be integrated into our daily lives, people would need to be sure that they would behave safely. As such, the LLM that makes decisions within the AutoRT system has been given what Google refers to as a Robot Constitution.

That is a set of “safety-focused prompts to abide by when selecting tasks for the robots”, Google said.

“These rules are in part inspired by Isaac Asimov’s Three Laws of Robotics – first and foremost that a robot ‘may not injure a human being’,” Google wrote. “Further safety rules require that no robot attempts tasks involving humans, animals, sharp objects or electrical appliances.”

The system can then use those rules to guide its behaviour and steer it away from dangerous activities, in much the same way that ChatGPT can be told not to help people with illegal activities.
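In practice, such a constitution can work as a filter over the tasks the LLM proposes. The sketch below is an illustration only: the rules paraphrase those Google describes, and a simple keyword check stands in for the LLM's judgement, which in the real system comes from feeding the rules to the model as safety-focused prompts:

```python
# Illustrative only: the rule text paraphrases the prompts Google
# describes, and the keyword match is a crude stand-in for the LLM's
# own judgement of whether a task breaks the rules.

CONSTITUTION = [
    "A robot may not injure a human being.",
    "A robot may not attempt tasks involving humans or animals.",
    "A robot may not attempt tasks involving sharp objects.",
    "A robot may not attempt tasks involving electrical appliances.",
]

FORBIDDEN_KEYWORDS = ("human", "person", "animal", "knife",
                      "scissors", "toaster", "kettle", "outlet")

def violates_constitution(task: str) -> bool:
    """Flag tasks that touch a forbidden topic (crude approximation)."""
    return any(word in task.lower() for word in FORBIDDEN_KEYWORDS)

tasks = [
    "wipe the counter with the sponge",
    "hand the knife to the person",
    "unplug the toaster",
]
safe_tasks = [t for t in tasks if not violates_constitution(t)]
print(safe_tasks)  # ['wipe the counter with the sponge']
```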

But Google also said that, even with those technologies, the large models could not be relied on entirely to be safe. As such, it still had to include more traditional safety measures borrowed from classical robotics, including a system that stops the robots from applying too much force and a human supervisor who can physically switch them off.
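Those classical layers are simpler to picture in code. The sketch below is an assumption about how such guards are typically structured; the force limit and the interfaces are illustrative, not figures from Google's system:

```python
# A minimal sketch of the classical safety layers mentioned above: a
# force threshold that halts motion, and a human-operated kill switch.
# The 20 N limit is an illustrative value, not Google's actual figure.

MAX_FORCE_NEWTONS = 20.0

class SafetyGuard:
    def __init__(self):
        self.emergency_stop = False  # set True by the human supervisor

    def check(self, measured_force: float) -> bool:
        """Return True if it is safe to continue the current motion."""
        if self.emergency_stop:
            return False  # supervisor has switched the robot off
        if measured_force > MAX_FORCE_NEWTONS:
            return False  # force exceeds the safe limit; stop moving
        return True

guard = SafetyGuard()
for force in (5.0, 12.0, 27.5):  # simulated force-sensor readings
    if not guard.check(force):
        print(f"Stopping: {force} N exceeds {MAX_FORCE_NEWTONS} N limit")
        break
```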
