Kate photo: Why the controversy over editing and manipulation could be just the beginning
Concern over misleading images is not new – but it is now so easy to make them that people might not even know they are doing it
It was seemingly intended to end speculation. But the latest image released of the Princess of Wales has only led to more of it.
As soon as the picture was released, people began to notice inconsistencies: a sleeve that seemed to disappear, and blurring around the edges of clothes. Many suggested it had been edited – and UK and international picture agencies became so concerned that they recalled the image, telling the world that they could not be sure it was real.
The day after it was released, a new statement attributed to Kate appeared in a tweet. “Like many amateur photographers, I do occasionally experiment with editing,” it read. “I wanted to express my apologies for any confusion the family photograph we shared yesterday caused. I hope everyone celebrating had a very happy Mother’s Day. C.”
The post did not say how the image had actually been edited – which changes were made, or what software was used. The episode has prompted much speculation about artificial intelligence, but there is nothing to indicate whether or not AI was involved.
But the suggestion that it was edited the way “many amateur photographers” edit their pictures hints at a broader truth: altered images are becoming increasingly prevalent – and increasingly convincing. Misleading images have a long history, but they have never been as easy to create as they are now.
Indeed, edited images are now so commonplace that the people taking them might not even realise it. New phones and other cameras include technology that tries to improve pictures – but that can also change them in ways the photographer never sees.
Google’s new Pixel phones, for instance, include a “Best Take” feature that is a key part of their marketing. It is an attempt to solve a problem that has plagued group portraits for as long as cameras have existed: in any given set of photos of a group of people, one of them is guaranteed to be blinking, or looking away. Wouldn’t it be nice to be able to stick all the best bits together into one composite, improved image?
That’s what the Pixel does. Users can take a burst of similar photos, and the phone will gather them together and find the faces in each one. Those faces can then be swapped around: a blinking person’s face can be replaced with their face from another frame, and it will be seamlessly blended in.
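Google has not published how the feature works internally, but the first step – finding the faces in each frame of a burst – can be sketched with off-the-shelf tools. The snippet below is a minimal illustration in Python using OpenCV’s bundled face detector; the burst filenames are placeholders, and this is not Google’s implementation.

    # Sketch of the face-finding step behind a "Best Take"-style feature.
    # Illustrative only: Google's actual pipeline is not public, and the
    # burst filenames here are placeholders.
    import cv2

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    for path in ["burst_1.jpg", "burst_2.jpg", "burst_3.jpg"]:
        frame = cv2.imread(path)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Each detection is an (x, y, width, height) box that a compositing
        # step could later swap between frames.
        print(path, [tuple(map(int, f)) for f in faces])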
Recently, too, users of newer Samsung phones noticed that their cameras appeared to be superimposing detail onto pictures of the Moon. As some Reddit investigation revealed, pointing the camera at a deliberately blurred picture of the Moon still produced a shot full of detail that had not actually been there.
A controversy ensued, and Samsung admitted that its phones have a built-in “deep-learning-based AI detail enhancement engine”, which can recognise the Moon and add detail that was not present when the image was taken. Samsung said the feature was built to “enhance the image details”, but some affected customers complained that they were being given images of the Moon that they did not actually take.
It has also become increasingly easy to change parts of a photo after it is taken. Adobe has introduced a tool called “generative fill” into Photoshop: users can select part of a photo, tell an AI what they would like it swapped for, and have that happen. A clashing sweater can be replaced with a more attractive one in a matter of seconds, for instance.
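Adobe has not disclosed how generative fill works under the hood, but the same select-and-replace idea can be reproduced with open-source models. The sketch below uses the Hugging Face diffusers library with a publicly available Stability AI inpainting model; the filenames, mask and prompt are all placeholders, and this is not Adobe’s tool.

    # A generative-fill-style edit with an open-source inpainting model.
    # Not Adobe's implementation - the same idea with public components.
    # Requires: pip install diffusers transformers torch, and a GPU.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    photo = Image.open("portrait.jpg").convert("RGB").resize((512, 512))
    # White pixels in the mask mark the region to regenerate (the sweater).
    mask = Image.open("sweater_mask.png").convert("RGB").resize((512, 512))

    result = pipe(prompt="a plain navy sweater",
                  image=photo, mask_image=mask).images[0]
    result.save("portrait_edited.jpg")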
These controversies have led to a broader conversation about what a picture actually is. Photographs may never have been a simple matter of light hitting a sensor, but they have become far more complicated in recent years. The era of “computational photography” means that devices use their hardware to process images in ways that can make them more appealing but less accurate; readily available editing tools mean that precise changes to photographs are no longer confined to the darkroom.
Much of the recent conversation about image manipulation has focused on generative artificial intelligence, which makes it easy to edit images or create them outright. But worries about fake images stretch back much further: Photoshop, software so prevalent that its name became synonymous with misleading edits, was first developed in 1987, and the first faked photographs appeared almost as soon as modern photography was invented.
The rise of AI has, however, led to new concern over how fake images could damage trust in any kind of picture – and to fresh work to try to stop that happening. That has included a new focus on spotting and removing misleading images from social networks, for instance.
The same technology companies building tools that can edit images are also looking for ways to help people spot them. Adobe has a system called “Content Credentials” that allows creators to show whether an image has been edited and how; OpenAI, Google and others are exploring invisible watermarks so that people can check where an image came from.
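The watermarking schemes those companies are exploring are proprietary and designed to survive cropping and compression, but the basic idea – hiding a machine-readable signal invisibly in the pixels themselves – can be illustrated with a toy scheme that stores a label in the least significant bit of each pixel. The Python sketch below is illustrative only: a real provenance watermark is far more robust, and this one would be destroyed by a single re-save as JPEG.

    # Toy "invisible watermark": hide a short label in the least significant
    # bit of each pixel's red channel. Illustrative only - real systems are
    # far more robust, and this is destroyed by any lossy re-encode.
    from PIL import Image

    def embed(src: str, dst: str, label: str) -> None:
        img = Image.open(src).convert("RGB")
        bits = "".join(f"{b:08b}" for b in label.encode()) + "00000000"
        px = img.load()
        w, h = img.size
        if len(bits) > w * h:
            raise ValueError("image too small for label")
        for i, bit in enumerate(bits):
            x, y = i % w, i // w
            r, g, b = px[x, y]
            px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite lowest red bit
        img.save(dst, "PNG")  # lossless, so the hidden bits survive

    def extract(path: str) -> str:
        img = Image.open(path).convert("RGB")
        px = img.load()
        w, h = img.size
        out, byte = bytearray(), 0
        for i in range(w * h):
            byte = (byte << 1) | (px[i % w, i // w][0] & 1)
            if i % 8 == 7:
                if byte == 0:  # zero byte marks the end of the label
                    break
                out.append(byte)
                byte = 0
        return out.decode(errors="replace")

    embed("photo.png", "photo_marked.png", "made-with-ai")
    print(extract("photo_marked.png"))  # -> made-with-ai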
Some useful information is already hidden within picture files. Today’s cameras embed details in the files they create – what equipment was used, and when the picture was taken – though that data is easy to remove.
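That embedded data is EXIF metadata, and both reading it and stripping it take only a few lines of code. Below is a minimal sketch using the Python imaging library Pillow; “photo.jpg” is a placeholder filename.

    # Read the EXIF metadata a camera embeds in a JPEG, then strip it.
    # "photo.jpg" is a placeholder filename.
    from PIL import Image, ExifTags

    img = Image.open("photo.jpg")
    for tag_id, value in img.getexif().items():
        name = ExifTags.TAGS.get(tag_id, tag_id)  # numeric tag -> readable name
        print(f"{name}: {value}")  # e.g. Model, DateTime, Software

    # Pillow does not copy EXIF across on save unless explicitly asked to,
    # so simply re-saving the pixels produces a file with the metadata gone.
    img.save("photo_stripped.jpg")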
Traditional picture agencies have long had rules against misleading or manipulated pictures. But those rules require the agencies to exercise some discretion: correcting the colours in an image is a central part of photographers’ work, for instance, and agencies often distribute pictures from outside sources that they cannot fully verify – as happened with the picture of Kate.
The Associated Press, which was one of the first agencies to pull the image, says in its code of ethics for photojournalists that “AP pictures must always tell the truth” and that “we do not alter or digitally manipulate the content of a photograph in any way”.
Those firm words are not as definitive as they sound. The AP does allow “minor adjustments in Photoshop”, such as cropping an image or adjusting its colours – but the purpose of those adjustments, it says, is to “restore the authentic nature of the photograph”.
Similarly, the AP’s code does actually allow images that “have been provided and altered by a source”. But it says that “the caption must clearly explain it”, and requires the transmission of such images to be approved by a “senior photo editor”.
The agency has similar rules about AI-generated images: AI cannot be used to add or remove elements from a photo, and images “suspected or proven to be false depictions of reality” cannot be used at all. There was no indication that the picture of Kate had anything to do with AI, and neither the AP nor other photo agencies mentioned the technology in their statements – but, however it was edited, it emerged into a world more attuned than ever to the ease and danger of misleading images.
Much of the work on such standards has happened over the past year or so – since ChatGPT was released and kicked off fresh excitement about artificial intelligence. It has produced new standards on misleading images, new thinking about pictures taken decades before, and new concern about how simple it is to trick people. It may be easier than ever to create false images – but that may also have made it much harder to get away with using them.