ChatGPT cooks up fake sexual harassment scandal and names real law professor as accused

‘When first contacted, I found the accusation comical. After some reflection, it took on a more menacing meaning’

Vishwam Sankaran
Thursday 06 April 2023 02:15 EDT


OpenAI’s chatbot ChatGPT falsely accused an American law professor of sexual harassment by including him in a generated list of legal scholars accused of harassment, citing a non-existent Washington Post report.

In an opinion piece published in USA Today, professor Jonathan Turley from George Washington University wrote that he was falsely accused by ChatGPT of assaulting students on a trip he “never took” while working at a school he “never taught at”.

“It is only the latest cautionary tale on how artificial ‘artificial intelligence’ can be,” he said on Monday, highlighting some of the accuracy and reliability issues with AI chatbots like ChatGPT.

As part of a study, a lawyer had reportedly asked ChatGPT to generate a list of legal scholars who had committed sexual harassment.

The AI chatbot returned a list that included Mr Turley’s name, falsely accusing him of making sexually suggestive comments and attempting to touch a student during a class trip to Alaska, citing a fabricated article in the Post that it said was from 2018.

The George Washington University professor noted that no such article existed, a point the newspaper confirmed.

“What is most striking is that this false accusation was not just generated by AI but ostensibly based on a Post article that never existed,” Mr Turley tweeted.

“When first contacted, I found the accusation comical. After some reflection, it took on a more menacing meaning,” he said.

In another instance, ChatGPT falsely claimed a mayor in Australia had been imprisoned for bribery.

Brian Hood, the mayor of Hepburn Shire, has also threatened to sue ChatGPT creator OpenAI over the false accusations.

The chatbot falsely claimed he had been found guilty in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.

He did, however, work for the subsidiary, Reuters reported, citing lawyers representing Mr Hood.

The Australian mayor’s lawyers have reportedly sent a letter of concern to OpenAI, giving the company 28 days to fix the errors about Mr Hood or face a possible defamation lawsuit.

A spokesperson for Microsoft, which has reportedly invested $10bn in OpenAI and integrated its technology into the Bing search engine, was not immediately available for comment, Reuters reported.

Several scholars in recent months have warned that the chatbot’s use could disrupt academia, primarily because of the questionable accuracy of the content it generates.

University of Southern California AI expert Kate Crawford calls such falsely concocted stories by AI chatbots “hallucitations”.

ChatGPT gained prominence in December last year for its ability to respond to a range of queries with a human-like output.

Some experts have speculated that it may revolutionise entire industries and may even replace tools like Google’s search engine.

Researchers, including those at Harvard Medical School and the University of Pennsylvania’s Wharton School, found that the chatbot could pass qualifying examinations designed for students.

Others, however, have been more guarded.

The New York City education department said it was worried about the negative impacts of the chatbot on student learning, citing “concerns regarding the safety and accuracy of content”.

Recently, when Google began rolling out its ChatGPT rival Bard for some adults in the UK and US, it warned that the chatbot may share misinformation and could display biases.

The tech giant noted that the chatbot is “not yet fully capable of distinguishing between what is accurate and inaccurate information”, as it generates answers by predicting likely responses based on the material it was trained on.

OpenAI spokesperson Niko Felix told The Washington Post in a statement that improving factual accuracy is a “significant focus” for the company, adding that it is “making progress”.

“When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers,” he said.

The Independent has reached out to OpenAI for a comment.
