Biden administration seeks input on AI safety measures

By Matt O'Brien
Tuesday 11 April 2023 09:33 EDT
President Joe Biden's administration wants stronger measures to test the safety of artificial intelligence tools such as ChatGPT before they are publicly released, though it hasn't decided if the government will have a role in doing the vetting.

The U.S. Commerce Department on Tuesday said it will spend the next 60 days fielding opinions on the possibility of AI audits, risk assessments and other measures that could ease consumer concerns about these new systems.

“There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” said Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.

The NTIA, more of an adviser than a regulator, is seeking feedback about what policies could make commercial AI tools more accountable.

Biden said last week, during a meeting with his council of science and technology advisers, that tech companies must ensure their products are safe before releasing them to the public.

The Biden administration also unveiled a set of far-reaching goals last year aimed at averting harms caused by the rise of AI systems. But that was before the release of ChatGPT, from San Francisco startup OpenAI, and similar products from Microsoft and Google brought wider awareness of what the latest AI tools can do: generate human-like passages of text, as well as new images and video.

“These new language models, for example, are really powerful and they do have the potential to generate real harm,” Davidson said in an interview. “We think that these accountability mechanisms could truly help by providing greater trust in the innovation that’s happening.”

The NTIA's notice leans heavily on requesting comment about “self-regulatory” measures that the companies building the technology would likely lead. That's a contrast to the European Union, where lawmakers this month are negotiating the passage of new laws that could set strict limits on AI tools depending on how high a risk they pose.
