
Parents are putting more trust in ChatGPT than in actual doctors, study finds

Researchers at the University of Kansas said the findings were ‘surprising’ and that they were concerned about incorrect information generated by the chatbot

Julia Musto
Thursday 31 October 2024 15:58 EDT


Parents are trusting ChatGPT for medical advice over actual doctors and nurses, a new study found.

Researchers at the University of Kansas also found that parents rate AI-generated text as credible, trustworthy and moral.

“When we began this research, it was right after ChatGPT first launched — we had concerns about how parents would use this new, easy method to gather health information for their children,” lead author and doctoral student Calissa Leslie-Miller said in a release. “Parents often turn to the internet for advice, so we wanted to understand what using ChatGPT would look like and what we should be worried about.”

To reach these conclusions, Leslie-Miller and her colleagues conducted a study with 116 parents aged 18 to 65. The study was published earlier this month in the Journal of Pediatric Psychology.

The participants reviewed health-related texts generated either by healthcare professionals or by OpenAI’s chatbot ChatGPT; they were not told who, or what, had authored them. They were asked to rate the texts on five criteria: perceived morality, trustworthiness, expertise, accuracy and how likely they would be to rely on the information.

When comparing AI-generated text and that of healthcare experts, more than 115 parents told researchers at the University of Kansas that the ChatGPT text was more trustworthy (Getty Images/iStock)

In many cases, parents couldn’t tell which content had been generated by ChatGPT and which by the experts. Where the ratings differed significantly, the ChatGPT text was rated as more trustworthy, accurate and reliable than the expert-generated content.

“This outcome was surprising to us, especially since the study took place early in ChatGPT’s availability,” said Leslie-Miller. “We’re starting to see that AI is being integrated in ways that may not be immediately obvious, and people may not even recognize when they’re reading AI-generated text versus expert content.”

ChatGPT was released in November 2022. On Thursday, OpenAI announced it had added a search engine, known as ChatGPT search, to its chatbot. ChatGPT now has more than 250 million monthly active users.

In many cases, parents couldn’t tell which content was generated by ChatGPT or by the expert (REUTERS/Dado Ruvic/Illustration/File Photo)

Although ChatGPT performs well in many scenarios and could be a beneficial tool, the AI model is capable of producing incorrect information. It is not an expert, and users should proceed with caution.

“During the study, some early iterations of the AI output contained incorrect information,” Leslie-Miller said. “This is concerning because, as we know, AI tools like ChatGPT are prone to ‘hallucinations’ — errors that occur when the system lacks sufficient context.”

“In child health, where the consequences can be significant, it’s crucial that we address this issue,” she said. “We’re concerned that people may increasingly rely on AI for health advice without proper expert oversight.”
