AI poses same extinction risk as nuclear war, experts warn
Artificial intelligence bosses say mitigating risk of extinction from AI should be ‘global priority’
The heads of two of the leading AI firms have once again warned of the existential threat posed by advanced artificial intelligence.
DeepMind and OpenAI chief executives Demis Hassabis and Sam Altman pledged their support to a short statement published by the Centre for AI Safety, which claimed that regulators and lawmakers should take the “severe risks” more seriously.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement read.
The Centre for AI Safety, a San Francisco-based non-profit, was founded in 2022 with the aim of reducing “societal-scale risks from AI”, warning that artificial intelligence could be “extremely harmful” if used in warfare to develop new chemical weapons or enhance aerial combat.
Signatories of the latest statement, which did not specify what might become extinct, also included business and academic leaders in the field.
Among them were Geoffrey Hinton, who is sometimes nicknamed the “Godfather of AI”, and Ilya Sutskever, co-founder and chief scientist of ChatGPT developer OpenAI.
The list also included dozens of senior bosses at companies like Google, the co-founder of Skype, and the founders of AI company Anthropic.
AI is now in the global consciousness after several firms released new tools allowing users to generate text, images and even computer code by just asking for what they want.
Experts say the technology could take over jobs from humans – but this statement warns of an even deeper concern.
The emergence of tools like ChatGPT and Dall-E has resurfaced fears that AI could one day wipe out humanity if it surpasses human intelligence.
Earlier this year, tech leaders called on leading AI firms to pause development of their systems for six months in order to work on ways to mitigate risks.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the open letter from the Future of Life Institute stated.
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”
Additional reporting from agencies