Human brain might subconsciously be able to detect deepfakes, study suggests
Learning how brain spots deepfakes could help create algorithms to flag them online, scientists say
The human brain could have the ability to subconsciously detect deepfakes, suggests a new study that may lead to the creation of tools for curbing the spread of disinformation.
Deepfakes are videos, images, audio, or text that appear to be authentic, but are computer-generated clones designed to mislead and sway public opinion.
Subjects tried to detect deepfakes and were assessed using electroencephalography (EEG) brain scans, said the study, published recently in the journal Vision Research.
The brains of these individuals could successfully detect deepfakes about 54 per cent of the time, said scientists, including those at the University of Sydney in Australia.
However, when an earlier group was asked to verbally identify the same deepfakes, their success rate was only 37 per cent.
“Although the brain accuracy rate in this study is low – 54 per cent – it is statistically reliable. That tells us the brain can spot the difference between deepfakes and authentic images,” said study co-author Thomas Carlson from the University of Sydney.
There are now a growing number of deepfake videos online – from non-consensual explicit content to doctored media used in disinformation campaigns by foreign adversaries.
For instance, at the beginning of the Russian invasion of Ukraine, a deepfake video of president Volodymyr Zelensky urging his troops to surrender to Russian forces surfaced on social media.
As scientists across the world attempt to find new ways to identify deepfakes, researchers behind the new study said their findings could be a springboard in the fight against such doctored content online.
“If we can learn how the brain spots deepfakes, we could use this information to create algorithms to flag potential deepfakes on digital platforms like Facebook and Twitter,” Dr Carlson said.
In the new study, participants were shown 50 images of real and computer-generated fake faces and asked to identify which was which.
The researchers then showed a different group of participants the same images while their brain activity was recorded using EEG, without telling them that half the images were fakes.
Comparing the two sets of findings, the scientists found that people’s brains were better at detecting deepfakes than their conscious judgements.
But scientists have cautioned that the findings are “just a starting point” and said further validation of the results is needed.
“More research must be done. What gives us hope is that deepfakes are created by computer programs, and these programs leave ‘fingerprints’ that can be detected,” Dr Carlson said.