‘The Game is Over’: Google’s DeepMind says it is on verge of achieving human-level AI

New Gato AI is ‘generalist agent’ that can carry out a huge range of complex tasks, from stacking blocks to writing poetry

Anthony Cuthbertson
Monday 23 May 2022 02:35 EDT
Google’s DeepMind has pioneered advances in artificial intelligence since its founding in 2010, with the ultimate goal of creating a human-level AI (Alan Warburton / Better Images of AI / CC)

Human-level artificial intelligence is close to finally being achieved, according to a lead researcher at Google’s DeepMind AI division.

Dr Nando de Freitas said “the game is over” in the decades-long quest to realise artificial general intelligence (AGI) after DeepMind unveiled an AI system capable of completing a wide range of complex tasks, from stacking blocks to writing poetry.

Described as a “generalist agent”, DeepMind’s new Gato AI simply needs to be scaled up in order to create an AI capable of rivalling human intelligence, Dr de Freitas said.

Responding to an opinion piece written in The Next Web that claimed “humans will never achieve AGI”, DeepMind’s research director wrote that, in his opinion, such an outcome is inevitable.

“It’s all about scale now! The Game is Over!” he wrote on Twitter.

“It’s all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline... Solving these challenges is what will deliver AGI.”

When asked by machine learning researcher Alex Dimakis how far he believed the Gato AI was from passing a real Turing test – a measure of computer intelligence that requires a human to be unable to distinguish a machine from another human – Dr de Freitas replied: “Far still.”

Leading AI researchers have warned that the advent of AGI could result in an existential catastrophe for humanity, with Oxford University Professor Nick Bostrom speculating that a “superintelligent” system that surpasses biological intelligence could see humans replaced as the dominant life form on Earth.

One of the main concerns with the arrival of an AGI system, capable of teaching itself and becoming exponentially smarter than humans, is that it would be impossible to switch off.

Fielding further questions from AI researchers on Twitter, Dr de Freitas said “safety is of paramount importance” when developing AGI.

“It’s probably the biggest challenge we face,” he wrote. “Everyone should be thinking about it. Lack of enough diversity also worries me a lot.”

Google, which acquired London-based DeepMind in 2014, is already working on a “big red button” to mitigate the risks associated with an intelligence explosion.

In a 2016 paper titled ‘Safely Interruptible Agents’, DeepMind researchers outlined a framework for preventing advanced artificial intelligence from ignoring shut-down commands.

“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences,” the paper stated.

“If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation.”
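The interruption mechanism the paper describes can be illustrated with a toy sketch. The names and structure below are hypothetical, written only to make the idea concrete; the paper’s actual framework is a formal reinforcement-learning construction, not application code.

```python
# Toy illustration of a safely interruptible agent loop (hypothetical
# names; NOT the paper's formal framework). A human "big red button"
# press overrides the agent's chosen action with a safe no-op.

class Agent:
    """A trivial agent that always tries to step forward."""

    def __init__(self):
        self.position = 0

    def choose_action(self):
        return 1  # always move forward in this toy example

    def act(self, action):
        self.position += action


def run(agent, steps, interrupt_at=None):
    """Run the agent for `steps` timesteps.

    If `interrupt_at` is set, every timestep from that point on is
    interrupted: the agent's own action is replaced with a safe
    no-op (0), modelling the human operator's shutdown override.
    """
    for t in range(steps):
        action = agent.choose_action()
        if interrupt_at is not None and t >= interrupt_at:
            action = 0  # interruption: force the safe action
        agent.act(action)
    return agent.position


print(run(Agent(), steps=10))                  # uninterrupted run
print(run(Agent(), steps=10, interrupt_at=5))  # interrupted at t=5
```

The subtle point the paper addresses, which this sketch deliberately omits, is ensuring that a learning agent does not come to treat interruptions as something to avoid or manipulate.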
