AI pioneer warns UK is failing to protect against ‘existential threat’ of machines

Professor Stuart Russell says ‘stakes couldn’t be higher’ amid rise of super-intelligent machines.

Rob Freeman
Saturday 13 May 2023 09:08 EDT
Professor Stuart Russell warned a system similar to ChatGPT could form part of a super-intelligent machine which could not be controlled (PA Archive)


One of the pioneers of artificial intelligence has warned the government is not safeguarding against the dangers posed by future super-intelligent machines.

Professor Stuart Russell told The Times ministers were favouring a light-touch approach to the burgeoning AI industry, despite warnings from civil servants that it could create an existential threat.

A former adviser to both Downing Street and the White House, Prof Russell is a co-author of the most widely used AI textbook and lectures on computer science at the University of California, Berkeley.

He told The Times a system similar to ChatGPT – which has passed exams and can compose prose – could form part of a super-intelligent machine which could not be controlled.

“How do you maintain power over entities more powerful than you – forever?” he asked. “If you don’t have an answer, then stop doing the research. It’s as simple as that.

“The stakes couldn’t be higher: if we don’t control our own civilisation, we have no say in whether we continue to exist.”

In March, he co-signed an open letter with Elon Musk and Apple co-founder Steve Wozniak warning of the “out-of-control race” going on at AI labs.

The letter warned the labs were developing “ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control”.

Prof Russell has worked for the UN on a system to monitor the nuclear test-ban treaty and was asked to work with the Government earlier this year.

“The Foreign Office … talked to a lot of people and they concluded that loss of control was a plausible and extremely high-significance outcome,” he said.

“And then the government came out with a regulatory approach that says: ‘Nothing to see here… we’ll welcome the AI industry as if we were talking about making cars or something like that’.”

He said making changes to the technical foundations of AI to add necessary safeguards would take “time that we may not have”.

“I think we got something wrong right at the beginning, where we were so enthralled by the notion of understanding and creating intelligence, we didn’t think about what that intelligence was going to be for,” he said.

“Unless its only purpose is to be a benefit to humans, you are actually creating a competitor – and that would be obviously a stupid thing to do.

“We don’t want systems that imitate human behaviour… you’re basically training it to have human-like goals and to pursue those goals.

“You can only imagine how disastrous it would be to have really capable systems that were pursuing those kinds of goals.”

He said there were signs of politicians becoming aware of the risks.

“We’ve sort of got the message and we’re scrambling around trying to figure out what to do,” he said. “That’s what it feels like right now.”

The government has launched the AI Foundation Model Taskforce which it says will “lay the foundations for the safe use of foundation models across the economy and ensure the UK is at the forefront of this pivotal AI technology”.
