UK’s lack of AI regulation shows ‘complete misunderstanding’ and creates security threat, tech pioneer warns

Professor Stuart Russell said the UK’s approach to the tech’s regulation was a ‘complete misunderstanding’

Alexander Butler
Friday 12 January 2024 12:51 EST
UK approach to AI regulation is a ‘complete misunderstanding’, tech pioneer warns (Getty Images)


The UK’s approach to AI regulation shows a “complete misunderstanding” and could pose multiple threats to security, an AI pioneer warned.

Professor Stuart Russell said the government’s refusal to regulate artificial intelligence with tough legislation was a mistake - increasing the risk of fraud, disinformation and bioterrorism. It comes as Britain continues to resist creating a tougher regulatory regime due to fears legislation could slow growth - in stark contrast to the EU, US and China.

“There is a mantra of ‘regulation stifles innovation’ that companies have been whispering in the ear of ministers for decades,” Prof Russell told The Independent. “It’s a misunderstanding. It’s not true.”

Professor Stuart Russell said the UK’s refusal to introduce tough AI regulation was a misunderstanding (Creative Commons)

“Regulated industry that provides safe and beneficial products and services - like aviation - promotes long-term innovation and growth,” he added.

The scientist has previously called for a “kill switch” - code written to detect if the technology is being ill-used - to be built into the software to save humanity from disaster.

Last year, the British-born expert, now a professor of computer science at the University of California, Berkeley, said a global treaty to regulate AI was needed before the software progresses to the point where it can no longer be controlled. He warned that large language models and deepfake technology could be used for fraud, disinformation and bioterrorism if left unchecked.

Despite the UK convening a global AI summit last year, Rishi Sunak’s government said it would refrain from creating specific AI legislation in the short term in favour of a light-touch regime.

The government is set to publish a series of tests that would need to be met before new laws on artificial intelligence are passed, reports suggest.

Rishi Sunak’s government said it would refrain from creating specific AI legislation in the short term in favour of a light-touch regime (EPA)

Ministers will publish criteria in the coming weeks on the circumstances in which they would enact curbs on powerful AI models created by leading companies such as OpenAI and Google, according to the Financial Times.

The UK’s cautious approach to regulating the sector contrasts with moves around the world. The EU has agreed a wide-ranging AI Act that creates strict new obligations for leading AI companies making high-risk technologies.

By contrast, US President Joe Biden has issued an executive order compelling AI companies to reveal how they are tackling threats to national security and consumer privacy. China has also issued detailed guidance on the development of AI, emphasising the need to control content.
