Regulation ‘critical’ to curb risk posed by AI, boss of ChatGPT tells Congress
You’re nervous about chatbots? So are we, says OpenAI chief Sam Altman
Artificial intelligence needs regulation, and its potential use to interfere with elections is a “significant area of concern”, the boss of the company behind ChatGPT has told Congress.
OpenAI chief executive Sam Altman proposed an agency that would license the most powerful AI systems to ensure safety standards.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” he told a Senate hearing on Tuesday.
Government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems, he said.
His San Francisco-based start-up rocketed to public attention late last year after it released ChatGPT, the free chatbot tool that answers questions with convincingly human-like responses.
What started out as a panic among educators about ChatGPT’s use to cheat on homework assignments has expanded to broader concerns about the ability of AI tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.
Founded in 2015, OpenAI is also known for other AI products including the image-maker DALL-E. Microsoft has invested billions in the start-up and has integrated its technology into its own products, including its search engine Bing.
“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs and security,” said the panel’s ranking Republican, Josh Hawley. “This hearing marks a critical first step towards understanding what Congress should do.”
Mr Altman and other tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules.
“There’s no way to put this genie in the bottle. Globally, this is exploding,” said Senator Cory Booker.
Senator Mazie Hirono noted the danger of misinformation as the 2024 election nears. “In the election context, for example, I saw a picture of former President Trump being arrested by NYPD and that went viral,” she said, pressing Mr Altman on whether he would consider the faked image harmful.
Mr Altman responded that creators should make clear when an image is generated rather than factual. He also said companies should have the right to say they do not want their data used for AI training, which is one idea being discussed on Capitol Hill.
The White House has convened top technology CEOs, including Mr Altman, to address AI. Politicians are seeking action to further the technology’s benefits and national security while limiting its misuse, but consensus is far from certain.
Reuters and Associated Press contributed to this report.