Rise of AI images ‘reducing trust’ in what people see online, charity warns
Full Fact has called on the Government to boost funding for media literacy to help people spot manipulated content.
The rise of AI-generated images is eroding public trust in online information, a leading fact-checking group has warned.
Full Fact said the increase in misleading images circulating online – and being shared by thousands of people – highlights how many struggle to spot such pictures.
The organisation has expressed concerns about whether the new Online Safety Act is adequate to combat harmful misinformation on the internet, including the growing amount of AI-generated content, and has called on the Government to increase media literacy funding so the public can better identify fake content.
The campaign group points to a number of recent incidents, including fake mugshots of former US president Donald Trump and an image of Pope Francis wearing a puffer jacket, as clear instances where many users were fooled into sharing fake content and therefore misinformation.
Full Fact’s fact-checking work has also highlighted fake photographs of the Duke of Sussex and Prince of Wales together at the coronation, which it says were shared more than 2,000 times on Facebook, and an image of Prime Minister Rishi Sunak pulling a pint of beer, which was edited to look worse and viewed thousands of times on X, formerly Twitter.
The charity said it believes much of the influx of low-quality content flagged by fact-checkers is intended not to convince people of any individual claim but to reduce trust in information generally.
It also says the large volume of fake or manipulated content could have an impact on the availability of good information online by flooding search results.
The rapid recent evolution of AI apps means capable image generation and manipulation tools are now readily available online.
Chris Morris, Full Fact chief executive, said: “This year, we have seen repeated instances of fake AI images being shared and spreading rapidly online, with many people unsuspectingly being duped into sharing bad information.
“A great example is the viral AI-generated image of the Pope wearing a puffer jacket, which was shared by tens of thousands of people online before being debunked by fact-checkers and news outlets alike.
“It is unfair to expect the public to rely on news outlets or fact-checkers alone to tackle this growing problem.
“Anyone can now access AI imaging tools, and unless the Government ramps up its resourcing to improve media literacy, and addresses the fact that the Online Safety Act fails to cover many foreseeable harms from content generated with AI tools, the information environment will be more difficult for people to navigate.
“A lack of action risks reducing trust in what people see online. This risks weakening our democracy, especially during elections.”
A Government spokesperson said: “We recognise the threat digitally manipulated content can pose, which is why we have ensured the Act, among the first of its kind anywhere in the world, is future proofed for issues like this. Under our new law, platforms will be required to swiftly remove manipulated content when it is illegal or breaches their terms of service – including user-generated content using AI. Failure to comply with these duties under the Act will incur severe fines.
“The government is also investing to support projects developing media literacy skills, including several projects specifically designed to build resilience to false information.”