Brain scans linked to ChatGPT-like AI model found capable of revealing people’s thoughts

‘This is a real leap forward compared to what’s been done before’

Vishwam Sankaran
Tuesday 02 May 2023 02:07 EDT

Scientists have developed a new artificial intelligence model that can analyse brain activity scans to reveal people’s thoughts – an advance that may help those unable to speak after a stroke.

Researchers, including those from The University of Texas at Austin in the US, say the new AI model is a “real leap forward” compared to what has been achieved before in helping those who are mentally conscious yet unable to physically speak.

In the latest study, published in the journal Nature Neuroscience on Monday, scientists found that an AI system called a semantic decoder can translate a person’s brain activity into text as they listen to a story or imagine telling one.

The new tool relies partly on models similar to the ones that power the now-famous AI chatbots – OpenAI’s ChatGPT and Google’s Bard – to convey “the gist” of people’s thoughts from analysing their brain activity.

But unlike many previous such attempts to read people’s minds, scientists said the system does not require subjects to have surgical implants, making the process noninvasive.

In the technique, the AI decoder is first trained extensively on an individual’s brain activity, which is measured using an fMRI scanner.

During this process, individuals listen to hours of podcasts in the scanner.

Then, provided participants are open to having their thoughts decoded, they listen to a new story or imagine telling one, and the AI generates corresponding text from their brain activity alone.
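
To make the idea concrete, here is a minimal, hypothetical sketch in Python of how such a decoder could work. It is not the researchers’ code: simulated data stand in for fMRI scans, random vectors stand in for language-model features of text, and simple ridge regression stands in for the trained encoding model.

```python
# A minimal, hypothetical sketch of a semantic-decoder pipeline (not the study's code).
# Assumptions: hash-seeded random vectors stand in for language-model features of text,
# ridge regression stands in for the encoding model that predicts fMRI responses from
# those features, and the "brain scans" are simulated. Decoding picks the candidate
# sentence whose predicted brain response best matches the newly observed activity.
import hashlib
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS, N_FEATURES = 200, 50                      # toy sizes, far smaller than real fMRI data

def text_features(sentence: str) -> np.ndarray:
    """Stand-in for language-model features: a deterministic pseudo-random vector."""
    seed = int(hashlib.md5(sentence.encode()).hexdigest(), 16) % (2 ** 32)
    return np.random.default_rng(seed).normal(size=N_FEATURES)

# --- Training phase: hours of listening in the scanner (simulated here) ---
train_sentences = [f"training sentence {i}" for i in range(300)]
X = np.stack([text_features(s) for s in train_sentences])           # text features
true_W = rng.normal(size=(N_FEATURES, N_VOXELS))                    # "true" brain mapping
Y = X @ true_W + 0.1 * rng.normal(size=(len(X), N_VOXELS))          # simulated fMRI responses

# Ridge regression: learn to predict brain responses from text features
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(N_FEATURES), X.T @ Y)

# --- Decoding phase: a new scan arrives; score candidate sentences against it ---
heard = "i don't have my driver's license yet"
observed = text_features(heard) @ true_W                            # simulated "new" scan

candidates = [
    "she has not even started to learn to drive yet",
    "i don't have my driver's license yet",
    "the weather was cold and rainy all week",
]
scores = {c: -np.linalg.norm(text_features(c) @ W - observed) for c in candidates}
print(max(scores, key=scores.get))  # the candidate that best explains the observed activity
```

If the features instead came from a large language model, sentences with similar meanings would produce similar predicted brain responses, which is one way a decoder could recover the gist of a thought rather than its exact words.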

“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” study co-author Alex Huth said in a statement.

“We’re getting the model to decode continuous language for extended periods of time with complicated ideas,” Dr Huth said.

While the output is not a word-for-word transcript, researchers said the model is designed to capture “the gist” of what is being said or thought – albeit not perfectly.

About half the time, the machine can produce text that closely – and sometimes precisely – matches the intended meanings of the original words.

Citing an example, they said that in experiments, a participant listening to a speaker say “I don’t have my driver’s license yet” had their thoughts translated as, “She has not even started to learn to drive yet”.

In another instance, when a participant was listening to the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’” it was decoded as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’”
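
As a rough, hypothetical illustration of what “matching the intended meaning” could look like when scored, the toy snippet below compares the example above using simple word overlap. It is not the evaluation used in the study, which would require richer, language-model-based comparisons that also recognise paraphrases.

```python
# A toy, hypothetical way to score how closely a decoded sentence tracks the original.
# This is NOT the study's evaluation metric; it is a simple bag-of-words cosine
# similarity, shown only to make "closely matches the intended meaning" concrete.
import math
import re
from collections import Counter

def word_overlap(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words count vectors of two sentences."""
    ca = Counter(re.findall(r"[a-z]+(?:'[a-z]+)?", a.lower()))
    cb = Counter(re.findall(r"[a-z]+(?:'[a-z]+)?", b.lower()))
    dot = sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

original = "I didn't know whether to scream, cry or run away. Instead, I said, 'Leave me alone!'"
decoded = "Started to scream and cry, and then she just said, 'I told you to leave me alone.'"
unrelated = "The weather was cold and rainy all week."

print(word_overlap(original, decoded))    # noticeably higher ...
print(word_overlap(original, unrelated))  # ... than for an unrelated sentence (0.0 here)
```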

Addressing questions about the potential misuse of the technology, such as by authoritarian governments to spy on citizens, scientists noted that the AI worked only with cooperative participants who had willingly taken part in extensively training the decoder.

For individuals on whom the decoder had not been trained, they said the results were “unintelligible”.

“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that. We want to make sure people only use these types of technologies when they want to and that it helps them,” said Jerry Tang, another author of the study.

“A person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they’re listening to before this really works well on them,” Dr Huth said.

Scientists also found that unwilling participants can potentially defend against having their thoughts decoded.

They said tactics such as thinking of animals or quietly imagining telling their own story can let participants thwart the system.

Currently, the system is also not practical for use outside of the lab as it relies on an fMRI machine.

“As brain-computer interfaces should respect mental privacy, we tested whether successful decoding requires subject cooperation and found that subject cooperation is required both to train and to apply the decoder,” scientists concluded in the study.

However, they said that as this AI technology develops, there is a need to be proactive in enacting policies that protect people and their privacy.

“Regulating what these devices can be used for is also very important,” Dr Tang said.
