Strange blood flow is the secret to detecting deepfakes, new research suggests

Development may also help researchers detect how deepfakes are made

Adam Smith
Friday 02 October 2020 07:16 EDT


Deepfake videos can be detected by measuring the blood circulation of the person speaking, research suggests.

A deepfake is a video where artificial intelligence and deep learning – an algorithmic learning method used to train computers – has been used to make a person appear to say something they have not.

Notable examples include a manipulated video in which Richard Nixon delivers a contingency speech about the Apollo 11 mission, and one of Barack Obama insulting Donald Trump.

These edited videos can be extremely difficult to detect, but researchers have suggested that examining how blood moves around the face could indicate what is real and what is fake, since deepfakes cannot replicate it with high enough fidelity.

“Biological signals hidden in portrait videos can be used as an implicit descriptor of authenticity, because they are neither spatially nor temporally preserved in fake content,” the research, published in IEEE Transactions on Pattern Analysis and Machine Intelligence, states.

This measurement technique is called photoplethysmography, or PPG. It detects the subtle changes in skin colour caused by blood flow, and is used to monitor newborn babies without having to attach anything to their bodies, because their skin is thinner than an adult’s.

“Synthetic content does not contain frames with stable PPG,” the scientists write; they were reportedly able to tell whether a video was real or fake with over 90 per cent accuracy.
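The basic idea can be illustrated with a minimal sketch. The code below assumes a video is already available as an array of RGB frames and a fixed face region of interest; real systems, including the research described here, use face tracking and far more sophisticated signal processing, so the function names and parameters are illustrative only. A real face should show a stable frequency peak in the human heart-rate band; a synthetic face generally will not.

```python
# A hypothetical sketch of remote PPG extraction from video frames.
import numpy as np

def extract_ppg_signal(frames, roi):
    """Average the green channel over a skin region in each frame.

    frames: array of shape (num_frames, height, width, 3), RGB.
    roi: (top, bottom, left, right) bounds of the skin region.
    The green channel carries the strongest blood-volume signal.
    """
    top, bottom, left, right = roi
    return frames[:, top:bottom, left:right, 1].mean(axis=(1, 2))

def dominant_frequency_hz(signal, fps):
    """Return the strongest frequency in the signal via an FFT.

    For a real face this should fall in the heart-rate band
    (roughly 0.7-4 Hz) and stay stable across the video.
    """
    signal = signal - signal.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Demo with synthetic frames: a 1.2 Hz (72 bpm) pulse faintly
# modulating the brightness of a uniform "skin" patch.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)
frames = np.full((len(t), 64, 64, 3), 128.0)
frames[..., 1] += pulse[:, None, None]      # pulse appears in the green channel

signal = extract_ppg_signal(frames, roi=(16, 48, 16, 48))
print(round(dominant_frequency_hz(signal, fps), 2))  # recovers 1.2 Hz
```

A detector built on this idea would check not just for a plausible pulse frequency but for its spatial and temporal consistency across the face, which is what the researchers say fake content fails to preserve.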

The technique can also be used to identify which model generated a deepfake, narrowing the source down to one of four generators: DeepFakes, Face2Face, FaceSwap or NeuralTextures.

Experts have called deepfakes the most dangerous cybercrime of the future, because they are difficult to detect and the technology could be used for a wide variety of crimes.

A fake video impersonating a friend or family member could be used to discredit a public figure, for example.

In the long term, the public could come to distrust audio and video evidence altogether.

At the moment, deepfakes are predominantly used for pornography. In June 2020, research indicated that 96 per cent of all deepfakes online were pornographic, and that nearly 100 per cent of those videos depicted women.
