Tom Cruise test shows people can’t detect fake videos even when they know they could be fake

Exclusive: Difficulty of identifying deepfakes ‘threatens to lower the information value of video media entirely’, researchers warn

Adam Smith
Saturday 15 January 2022 05:51 EST


Most people are unable to tell they are watching a “deepfake” video even when they are informed that the content they are watching could have been digitally altered, research suggests.

The term “deepfake” refers to a video in which artificial intelligence and deep learning – an algorithmic learning method used to train computers – have been used to make a person appear to say something they have not.

Notable examples include a manipulated video of Richard Nixon delivering an address about the Apollo 11 mission and one of Barack Obama insulting Donald Trump – with some researchers suggesting illicit use of the technology could make it the most dangerous form of crime in the future.

In the first experiment, conducted by researchers from the University of Oxford, Brown University, and the Royal Society, one group of participants watched five unaltered videos, while another watched four unaltered videos and one deepfake – with viewers asked to identify which video was fake.

The research was undertaken as part of a report by the Royal Society on how technology is changing online information, which will be released next week. It warns about the rise of misinformation and points to deepfakes as an area where further work is needed to limit the harms.

Scientists used videos of Tom Cruise created by VFX artist Chris Ume and uploaded to TikTok, which show the American actor performing magic tricks and telling jokes about Mikhail Gorbachev.

Participants who were warned beforehand identified the deepfake 20 per cent of the time, compared with 10 per cent of those who were not – but even with a direct warning, more than 78 per cent of people could not distinguish the deepfake from authentic content.

“Individuals are no more likely to notice anything out of the ordinary when exposed to a deepfake video of neutral content,” the researchers wrote in a pre-release of the paper, “compared to a control group who viewed only authentic videos.” The paper is expected to be peer reviewed and published in a few months.

Participants made the same errors regardless of their familiarity with Mr Cruise, their gender, their level of social media use, or their confidence in being able to detect altered video.

The only characteristic that significantly correlated with the ability to detect a deepfake was age, the researchers found, with older participants better able to identify the deepfake.

“The difficulty of manually detecting real from fake videos (i.e., with the naked eye) threatens to lower the information value of video media entirely,” the researchers predict.

“As people internalise deepfakes’ capacity to deceive, they will rationally place less trust in all online videos, including authentic content.”

Should this continue, people will have to rely on warning labels and content moderation to ensure that deceptive videos and other misinformation do not become endemic on social media platforms.

“The question we were asking is, when you’re warned that a video might be a deepfake, is that enough for your average internet user to spot the signs for themselves? Our research suggests, for the majority of people, it is not,” lead author Andrew Lewis, a doctoral researcher at the University of Oxford Centre for Experimental Social Science, told The Independent.

“This means we will have to rely on – and trust in – the moderation systems on platforms we all use.”

Facebook, Twitter, and other sites routinely rely on regular users flagging content to their moderators – a task which could prove difficult if people are unable to tell misinformation and authentic content apart.

Facebook in particular has been criticised repeatedly in the past for not providing enough support for its content moderators and failing to remove false content. Research at New York University and France’s Université Grenoble Alpes found that from August 2020 to January 2021, articles from known purveyors of misinformation received six times as many likes, shares, and interactions as legitimate news articles.

Facebook contended that such research does not show the full picture, as “engagement [with pages] should not … be confused with how many people actually see it on Facebook”.

The researchers also raised concerns that “such warnings may be written off as politically motivated or biased”, as demonstrated by the conspiracy theories surrounding the Covid-19 vaccine or Twitter’s labelling of former president Trump’s tweets.

The deepfake of President Obama calling then-President Trump a “total and complete dipshit” was believed to be accurate by 15 per cent of participants in a 2020 study, despite the content itself being “highly improbable”.

A more general distrust of information online is a possible outcome of both deepfakes and content warnings, the researchers caution, and “policymakers should take [that] into account when assessing the costs and benefits of moderating online content”.
