Video game technology helps paralysed woman speak, researchers say

The development opens up a way to restore natural communication for those who cannot talk.

Nina Massey
Wednesday 23 August 2023 11:54 EDT
Video game technology has helped a paralysed woman speak (Noah Berger/UC San Francisco)

Video game technology has helped a woman left paralysed after a stroke speak again, researchers report.

Edinburgh-based Speech Graphics and American researchers at UC San Francisco (UCSF) and UC Berkeley say they have created the world’s first brain-computer interface that electronically produces speech and facial expression from brain signals.

The development opens up a way to restore natural communication for those who cannot speak.

The experts explain that the same software used to drive facial animation in games such as The Last Of Us Part II and Hogwarts Legacy turns brain waves into a talking digital avatar.

The researchers decoded the brain signals of the woman, Ann, into three forms of communication: text, synthetic voice, and facial animation on a digital avatar, including lip sync and emotional expressions.

According to the researchers, this represents the first time facial animation has been synthesised from brain signals.

The team was led by the chairman of neurological surgery at UCSF, Edward Chang, who has spent a decade working on brain-computer interfaces.

He said: “Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others.

“These advancements bring us much closer to making this a real solution for patients.”

A paper-thin rectangle of 253 electrodes was implanted onto the surface of the woman’s brain over areas that Dr Chang’s team has discovered are critical for speech.

The electrodes intercepted the brain signals that, if not for the stroke, would have gone to muscles in her tongue, jaw, voice box, and face.

A cable, plugged into a port fixed to the woman’s head, connected the electrodes to a bank of computers, allowing artificial intelligence (AI) algorithms to be trained over several weeks to recognise the brain activity associated with a vocabulary of more than 1,000 words.
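A minimal sketch of what that training step might look like, assuming the electrode recordings have already been preprocessed into one feature vector per attempted word; the toy data, dimensions, and simple linear classifier here are illustrative stand-ins, not the team’s actual deep-learning decoder:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for preprocessed neural data: one feature per electrode.
# (The real array had 253 electrodes; the real vocabulary topped 1,000 words.)
rng = np.random.default_rng(seed=0)
n_trials, n_electrodes, vocab_size = 2000, 253, 50
X = rng.normal(size=(n_trials, n_electrodes))    # neural features per attempted word
y = rng.integers(0, vocab_size, size=n_trials)   # ID of the word the subject attempted

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit a word classifier; on random data, accuracy should sit near chance (1/vocab_size).
decoder = LogisticRegression(max_iter=500).fit(X_train, y_train)
print(f"held-out accuracy: {decoder.score(X_test, y_test):.2f}")
```

The published system was considerably more sophisticated, operating on continuous streams of neural activity rather than isolated trials, but the structure is the same: train on sessions where the attempted word is known, then evaluate on held-out attempts.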

Thanks to the AI, the woman could write text, and speak using a voice synthesised from recordings of her speaking at her wedding before she was paralysed.

The woman worked with the researchers for weeks so the AI could decode her brain activity into facial movements.

The researchers worked with Michael Berger, the CTO and co-founder of Speech Graphics.

The company’s AI-based facial animation technology simulates muscle contractions over time, including speech articulations and nonverbal activity.

In one approach, the team used the subject’s synthesised voice as input to the Speech Graphics system in place of her actual voice to drive the muscles.

The software then converted the muscle actions into 3D animation in a video game engine.

The result was a realistic avatar of the subject that, driven by her efforts to communicate, accurately pronounced words in sync with the synthesised voice, researchers said.
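To make the muscle-to-animation step concrete, here is a heavily simplified sketch of how per-frame muscle contraction levels could be mapped onto the blendshape weights a game engine consumes. The muscle and blendshape names, and the one-to-one mapping, are hypothetical; Speech Graphics’ actual system is far more elaborate:

```python
from dataclasses import dataclass

@dataclass
class MuscleFrame:
    time: float                     # seconds since the start of the utterance
    activations: dict[str, float]   # muscle name -> contraction level in [0, 1]

def to_blendshape_weights(frame: MuscleFrame,
                          muscle_to_shape: dict[str, str]) -> dict[str, float]:
    """Map simulated muscle contractions onto the avatar's blendshape weights.

    A real pipeline blends many muscles per shape and smooths over time;
    this one-to-one mapping is only illustrative.
    """
    return {muscle_to_shape[m]: level
            for m, level in frame.activations.items()
            if m in muscle_to_shape}

# Hypothetical muscle and blendshape names.
muscle_to_shape = {"orbicularis_oris": "MouthPucker",
                   "zygomaticus_major": "MouthSmile"}
frame = MuscleFrame(time=0.033,
                    activations={"orbicularis_oris": 0.7, "zygomaticus_major": 0.2})
print(to_blendshape_weights(frame, muscle_to_shape))
# -> {'MouthPucker': 0.7, 'MouthSmile': 0.2}, streamed to the engine each frame
```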

However, in a second approach that is even more ground-breaking, the signals from the brain were meshed directly with the simulated muscles, allowing them to serve as a counterpart to the subject’s non-functioning muscles.

She could also cause the avatar to express specific emotions and move individual muscles, according to the study published in Nature.

Mr Berger said: “Creating a digital avatar that can speak, emote and articulate in real-time, connected directly to the subject’s brain, shows the potential for AI-driven faces well beyond video games.

“When we speak, it’s a complex combination of audio and visual cues that helps us express how we feel and what we have to say.

“Restoring voice alone is impressive, but facial communication is so intrinsic to being human, and it restores a sense of embodiment and control to the patient who has lost that.

“I hope that the work we’ve done in conjunction with Professor Chang can go on to help many more people.”

Kaylo Littlejohn, a graduate student working with Dr Chang, and Gopala Anumanchipalli, a professor of electrical engineering and computer sciences at UC Berkeley, said: “We’re making up for the connections between the brain and vocal tract that have been severed by the stroke.

“When the subject first used this system to speak and move the avatar’s face in tandem, I knew that this was going to be something that would have a real impact.”

In a separate study, researchers used a brain–computer interface (BCI) to enable a 68-year-old woman called Pat Bennett, who has ALS, also known as motor neurone disease (MND), to speak.

Although Ms Bennett’s brain can still formulate directions for generating phonemes – units of sound – the muscles involved in speech cannot carry out the commands.

Researchers implanted two tiny sensors in two separate regions of her brain and trained an artificial neural network to decode her intended vocalisations.

With the help of the device, she was able to communicate at an average rate of 62 words per minute, which is 3.4 times as fast as the previous record for a similar device.

It also moves closer to the speed of natural conversation, which is about 160 words per minute.

The computer interface achieved a 9.1% word error rate on a 50-word vocabulary, a 2.7-fold reduction in errors compared with the previous state-of-the-art speech BCI from 2021, researchers said.

A 23.8% word error rate was achieved on a 125,000-word vocabulary.
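For readers unfamiliar with the metric: word error rate is the word-level edit distance (substitutions, insertions and deletions) between what the system produced and what the speaker intended, divided by the length of the intended sentence. A small self-contained implementation, with made-up example sentences:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances between word prefixes.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                   # deletions only
    for j in range(len(hyp) + 1):
        d[0][j] = j                                   # insertions only
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(d[i - 1][j] + 1,            # delete a reference word
                          d[i][j - 1] + 1,            # insert a spurious word
                          substitution)               # substitute one word
    return d[-1][-1] / len(ref)

# One substitution out of six reference words -> roughly 16.7% WER.
print(f"{word_error_rate('imagine how different shopping will be', 'imagine how different shopping could be'):.3f}")
```

At a 9.1% word error rate, roughly one word in eleven comes out wrong; at 23.8%, roughly one in four.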

Lead author Frank Willett said: “This is a scientific proof of concept, not an actual device people can use in everyday life.

“But it’s a big advance toward restoring rapid communication to people with paralysis who can’t speak.”

Ms Bennett wrote: “Imagine how different conducting everyday activities like shopping, attending appointments, ordering food, going into a bank, talking on a phone, expressing love or appreciation — even arguing — will be when nonverbal people can communicate their thoughts in real time.”
