Brain-computer interface breakthrough sees thoughts translated into speech in scientific first

Neuroengineers from Columbia University reconstruct intelligible speech from a person's brain activity using artificial intelligence

Anthony Cuthbertson
Tuesday 29 January 2019 16:15 EST
Researchers translated thoughts directly into speech in scientific first (iStock)


Clear, intelligible speech generated through computer processing of human brain activity has been achieved by scientists for the first time.

Researchers at the Zuckerman Institute at Columbia University were able to reconstruct the words a person heard by monitoring their brain activity.

The breakthrough is an important step towards creating a brain-computer interface capable of reading the thoughts of people who are unable to communicate verbally.

"Our voices help connect us to our friends, family and the world around us, which is why losing the power of one's voice due to injury or disease is so devastatting," said Professor Nima Mesgarani, a principal investigator at Columbia University who led the study.

He added: "We have a potential way to restore that power. We've shown that, with the right technology, these people's thoughts could be decoded and understood by any listener."

Prof Mesgarani and his team used artificial intelligence to recognise the patterns of activity that appear in someone's brain when they listen to someone speak.

Using a computer algorithm similar to those found in smart assistants like Amazon's Alexa and Apple's Siri, the neuroengineers were able to synthesise speech from these brain patterns in a robotic voice.

The algorithm, called a vocoder, was trained using brain recordings from epilepsy patients treated by Dr Ashesh Dinesh Mehta at the Northwell Health Physician Partners Neuroscience Institute.

"Working with Dr Mehta, we asked epilepsy patients already undergoing brain surgery to listen to sentences spoken by different people, while we measured patterns of brain activity. These neural patterns trained the vocoder," said Prof Mesgarani.

When the technology was tested, listeners were able to understand the reconstructed speech around 75 per cent of the time.

"The sensitive vocoder and powerful neural networks represented the sounds the patients had originally listened to with surprising accuracy," said Prof Mesgarani.

The technology could eventually lead to a wearable brain-computer interface that translates an individual's thoughts, such as 'I need a glass of water', directly into synthesised speech or text.

"This would be a game changer," said Prof Mesgarani. "It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them."
