ChatGPT faces world’s first defamation lawsuit in Australia
Australian mayor says he may sue OpenAI over chatbot’s ‘false claims’
OpenAI is facing a lawsuit in Australia after a regional mayor accused ChatGPT of sharing false claims about him.
Victorian Mayor Brian Hood claims the AI chatbot was telling users that he had served time in prison as a result of a foreign bribery scandal.
He said he would sue OpenAI if the incorrect information was not removed, which would mark the first defamation lawsuit against an artificial intelligence chatbot.
Mayor Hood became concerned about his reputation when members of the public told him ChatGPT had falsely named him as a guilty party in a bribery case involving a subsidiary of the Reserve Bank of Australia in the early 2000s.
He did work for the subsidiary, Note Printing Australia, but was the person who notified authorities about payment of bribes to foreign officials to win currency printing contracts, and was never charged with a crime, lawyers representing him said.
The lawyers said they sent a letter of concern to ChatGPT owner OpenAI on 21 March, which gave OpenAI 28 days to fix the errors about their client or face a possible defamation lawsuit.
OpenAI, which is based in San Francisco, had not yet responded to Mayor Hood’s legal letter, the lawyers said. OpenAI did not respond to a request for comment.
A spokesperson for Microsoft, a major investor in OpenAI, was not immediately available for comment.
“It would potentially be a landmark moment in the sense that it’s applying this defamation law to a new area of artificial intelligence and publication in the IT space,” said James Naughton, a partner at Mayor Hood’s law firm Gordon Legal.
“He’s an elected official, his reputation is central to his role... so it makes a difference to him if people in his community are accessing this material”.
Australian defamation damages payouts are generally capped at around A$400,000 (US$214,000). Mayor Hood did not know the exact number of people who had accessed the false information about him – a determinant of the payout size – but the nature of the defamatory statements was serious enough that he may claim more than A$200,000, Mr Naughton said.
If the lawsuit is filed, it would accuse ChatGPT of giving users a false sense of accuracy by failing to include footnotes, Mr Naughton said.
“It’s very difficult for somebody to look behind that to say ‘how does the algorithm come up with that answer?’” he said. “It’s very opaque.”
OpenAI addressed the issue of misinformation in a detailed blog post published on Wednesday.
“Today’s large language models predict the next series of words based on patterns they have previously seen, including the text input the user provides,” the company wrote.
“In some cases, the next most likely words may not be factually accurate. Improving factual accuracy is a significant focus for OpenAI and many other AI developers, and we’re making progress.
“When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate. However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and to educate the public on the current limitations of these AI tools.”
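The mechanism OpenAI describes can be sketched with a toy example. The bigram counter below is an illustration of next-word prediction in general, not OpenAI's actual model, and the training sentence is invented for the demonstration: the program simply picks the statistically likeliest next word from text it has seen, whether or not the resulting sentence is true.

```python
from collections import Counter, defaultdict

# Illustrative only: a tiny bigram model "predicts the next word based
# on patterns it has previously seen" (the invented sentence below).
corpus = (
    "the mayor reported the bribery and the mayor was never charged"
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → mayor ("the mayor" appears most often)
```

A model like this outputs the most probable continuation, not a verified fact – the gap that, at vastly larger scale, produces the "hallucinations" OpenAI's blog post refers to.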
Additional reporting from agencies.