US election 2020: Microsoft creates new tool to detect deepfakes
It has also provided tools for creators so their work can more easily be authenticated
Microsoft has announced a new tool it has developed to combat the spread of deepfakes ahead of the US presidential election in November.
A “deepfake” is a video in which artificial intelligence and deep learning – an algorithmic learning method used to train computers – have been used to make a person appear to say something they have not.
“Microsoft Video Authenticator” can analyse a still photo or video and give the viewer a confidence score for the likelihood that the media has been artificially manipulated.
For videos, Microsoft says the tool provides frame-by-frame analysis, producing that percentage in real time for each frame as the video plays.
It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that may not be visible to the human eye.
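To make the frame-by-frame idea concrete, here is a minimal sketch of how such scoring could be wired up in Python with OpenCV. Microsoft has not published a public API for Video Authenticator, so the score_frame function below is a hypothetical placeholder for a trained detector, not the real tool.

```python
# Illustrative only: Microsoft has not released a public Video Authenticator
# API, so `score_frame` is a hypothetical stand-in for a trained detector.
import cv2  # pip install opencv-python


def score_frame(frame) -> float:
    """Return a manipulation-confidence percentage (0-100) for one frame.

    A real detector would inspect blending boundaries and subtle fading
    or greyscale artefacts; this placeholder returns 0.0 so the pipeline
    below is runnable end to end.
    """
    return 0.0


def analyse_video(path: str):
    """Yield (frame_index, confidence_percentage) for every frame."""
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of stream (or unreadable file)
            break
        yield index, score_frame(frame)
        index += 1
    capture.release()


for index, confidence in analyse_video("clip.mp4"):
    print(f"frame {index}: {confidence:.1f}% likely manipulated")
```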
“We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods,” the company said in a blog post.
“Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.”
Microsoft also said that it has developed new technology that can detect manipulated content and “assure people that the media they’re viewing is authentic”.
This is done through two components. The first is built into Microsoft Azure, the company's cloud computing service, and allows content producers to add digital hashes and certificates to their content. These travel with the media as metadata – information about the media, such as when and where it was created.
The second is a reader for these hashes, which Microsoft says can be used as a browser extension: it checks that the certificates match the hashes and tells viewers whether the content is authentic.
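Microsoft has not published the format of these hashes and certificates, but the underlying pattern – hash the media, sign the hash, and let a reader re-verify both – is a standard one. The sketch below illustrates it with SHA-256 and an Ed25519 signature from the widely used cryptography package; the function names and metadata layout are illustrative assumptions, not Microsoft's actual implementation.

```python
# Illustrative sketch of hash-plus-certificate provenance, not Microsoft's
# actual Azure implementation (which is unpublished). Requires:
#   pip install cryptography
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)


# --- Producer side: hash the media bytes and sign the hash ----------------
def publish(media: bytes, private_key: Ed25519PrivateKey) -> dict:
    digest = hashlib.sha256(media).digest()
    return {
        "sha256": digest,                       # the "hash" metadata
        "signature": private_key.sign(digest),  # the "certificate" analogue
    }


# --- Reader side (e.g. a browser extension): recompute and verify ---------
def is_authentic(media: bytes, metadata: dict, public_key) -> bool:
    digest = hashlib.sha256(media).digest()
    if digest != metadata["sha256"]:
        return False  # content was altered after signing
    try:
        public_key.verify(metadata["signature"], digest)
        return True   # hash matches and signature checks out
    except InvalidSignature:
        return False  # metadata was not issued by the claimed producer


# Demo: verification passes for the original bytes, fails after tampering.
key = Ed25519PrivateKey.generate()
original = b"raw media bytes"
meta = publish(original, key)
assert is_authentic(original, meta, key.public_key())
assert not is_authentic(original + b"tamper", meta, key.public_key())
```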
The authenticator will initially be available only through Microsoft's partnership with the AI Foundation, an American artificial intelligence company.
The foundation's “Reality Defender 2020” (RD2020) initiative will make the tool available to campaign organisations and news outlets.
A number of media companies, including the BBC, Radio-Canada, and the New York Times, will test the authentication technology.
Microsoft says that it hopes to work with more technology companies, news publishers and social media companies over the coming months.
At their current stage, deepfakes are primarily used for pornography: research published in June 2020 indicated that 96 per cent of all deepfakes online are pornographic.
However, a report from University College London last month ranked deepfakes as the most dangerous form of AI-enabled crime.
This is because they are so difficult to detect and could be used for a variety of nefarious purposes, such as blackmail or fraud.
“People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity”, said Dr Matthew Caldwell, who authored the research.