Scientists reconstruct Pink Floyd song from recorded brain waves of patients
Neuroscientists decoded the 1979 hit Another Brick In The Wall, Part 1.
Scientists have reconstructed a Pink Floyd classic from the recorded brain waves of patients who were undergoing epilepsy surgery while listening to the song.
Researchers at the University of California, Berkeley, in the US, used artificial intelligence (AI) techniques to decode the brain signals, recreating the 1979 hit Another Brick In The Wall, Part 1.
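The article does not spell out the decoding method, but song reconstruction of this kind is commonly framed as a regression problem: predict the audio spectrogram, time bin by time bin, from the electrode recordings, then resynthesise sound from the predicted spectrogram. The sketch below illustrates that framing with a ridge-regression decoder on stand-in random arrays; the array shapes, the choice of regulariser and every variable name are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal sketch of spectrogram-based stimulus reconstruction (assumed
# framing, not the authors' exact method). Real data would replace the
# random stand-ins below.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed shapes: neural features (time bins x electrodes) and the target
# audio spectrogram (time bins x frequency bands), aligned in time.
# 347 matches the number of music-responsive electrodes the article reports.
n_bins, n_electrodes, n_freqs = 5000, 347, 128
X = rng.standard_normal((n_bins, n_electrodes))  # stand-in for recorded brain activity
S = rng.standard_normal((n_bins, n_freqs))       # stand-in for the song's spectrogram

# Hold out a contiguous test segment (no shuffling, since data are a time series).
X_train, X_test, S_train, S_test = train_test_split(
    X, S, test_size=0.2, shuffle=False
)

# One regularised linear decoder fit jointly across all frequency bands.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, S_train)
S_hat = decoder.predict(X_test)

# Reconstruction quality is typically summarised as the correlation between
# the predicted and actual spectrograms, band by band.
r = [np.corrcoef(S_hat[:, f], S_test[:, f])[0, 1] for f in range(n_freqs)]
print(f"mean spectrogram correlation: {np.mean(r):.3f}")
```

On random stand-ins the correlation will hover near zero; the point of the sketch is the shape of the pipeline, in which a final vocoding step would turn the predicted spectrogram back into audio.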
The team said this is the first time scientists have reconstructed a song from recordings of brain activity.
They said the famous phrase “All in all it’s just another brick in the wall” is recognisable in the reconstructed song and the rhythms remain intact.
And although the words in the song are muddy, they are decipherable, the scientists said.
According to the team, the findings, reported in the journal PLOS Biology, show that brain signals can be translated to capture the musical elements of speech (prosody) – such as rhythm, stress, accent and intonation – which convey meaning that words alone cannot express.
While technology that can decode words for people who are unable to speak exists, the researchers said the sentences produced have a robotic quality – much like the way the late Stephen Hawking sounded when he used a speech-generating device.
The scientists believe their work could pave the way for new prosthetic devices that can help improve the perception of the rhythm and melody of speech.
Study author Robert Knight, a neurologist and UC Berkeley professor of psychology at the Helen Wills Neuroscience Institute, said: “It’s a wonderful result.
“One of the things for me about music is it has prosody and emotional content.
“As this whole field of brain machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who’s got ALS (amyotrophic lateral sclerosis – also known as motor neurone disease) or some other disabling neurological or developmental disorder compromising speech output.
“It gives you an ability to decode not only the linguistic content, but some of the prosodic content of speech, some of the affect.
“I think that’s what we’ve really begun to crack the code on.”
For the study, the researchers analysed brain activity recordings of 29 patients who underwent surgery a decade ago.
A total of 2,668 electrodes were used to record the brain activity, and 347 of them were specifically related to the music.
Analysis of song elements revealed a new region in the brain that represents rhythm – which, in this case, was the guitar rhythm.
The scientists also discovered that some portions of the auditory cortex – located just behind and above the ear – respond at the onset of a voice or a synthesiser, while other areas respond to sustained vocals.
The researchers found that language processing “is more left brain, while music is more distributed, with a bias toward right”.