ChatGPT creators OpenAI form ‘Preparedness’ group to get ready for ‘catastrophe’
AI systems have the potential to ‘benefit all of humanity’ but also ‘pose increasingly severe risks’, company warns
OpenAI, the creators of ChatGPT, have formed a new group to prepare for the “catastrophic risks” of artificial intelligence.
The “Preparedness” team will aim to “track, evaluate, forecast and protect against catastrophic risks”, the company said.
Those risks include artificial intelligence being used to craft powerful persuasive messages, to compromise cybersecurity, and to build nuclear and other kinds of weapons. The team will also guard against “autonomous replication and adaptation”, or ARA – the danger that an AI could gain the ability to copy and modify itself.
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI said. “But they also pose increasingly severe risks.”
Avoiding those dangerous situations will mean building frameworks to predict and then protect people against the dangerous capabilities of new artificial intelligence systems, OpenAI said. That will be one of the tasks of the new team.
At the same time, OpenAI launched a new “Preparedness Challenge”, which encourages people to imagine “the most unique, while still being probable, potentially catastrophic misuse of the model”, such as using it to shut down power grids.
Particularly good submissions of ideas for the malicious uses of artificial intelligence will win credits to use on OpenAI’s tools, and the company suggested that some entrants could be hired onto the team. It will be led by Aleksander Madry, an AI expert from the Massachusetts Institute of Technology, OpenAI said.
OpenAI revealed the new team as part of its contribution to the UK’s AI Safety Summit, which takes place next week. OpenAI is one of a range of companies that have made commitments on how they will ensure the safe use of artificial intelligence.