In focus

Don’t believe your eyes: how technology is changing photography forever

As a photo of the Princess of Wales is pulled over manipulation claims, Andrew Griffin explains how new technology in Google phones, Photoshop and other products makes it easier than ever to edit pictures. So are we going to have to rethink what a photo is, and can we ever truly believe what we see?

Monday 11 March 2024 07:35 EDT
The Princess of Wales has admitted to digitally altering a family photograph issued by Kensington Palace for Mother’s Day (Prince of Wales/Kensington Palace/PA Wire)

A seemingly innocuous picture released by the royal family to mark Mother’s Day, showing Kate Middleton smiling surrounded by her three children, has led to profound questions about how truthful photographs really are.

The photograph, taken by Prince William in Windsor, was pulled by major picture agencies over fears it had been edited.

However, this ability to manipulate photos is not new – and fake photos are practically as old as photography itself.

French photographer Hippolyte Bayard felt that the recognition for inventing photography had been taken from him by Louis Daguerre. In protest, he staged a picture of himself, Self Portrait as a Drowned Man, and wrote a note saying that the man depicted had been “lying in the morgue for days, no one has recognised him or claimed him!”

He signed it himself: “HB. 18 October 1840” – just a year after 1839, the usually accepted beginning of photography. As soon as we knew what photographs were, we used them to trick people.

The tools of that trickery have developed over time, hand in hand with the form itself. Photoshop, for instance, was introduced in 1990 as a tool for displaying and editing photographs, but the program’s name quickly became shorthand for a misleadingly edited image.

In recent years, however, new technologies have made it easier to fake images and put the power to do so in far more people’s hands. It is so readily available that people might not even know that their phones are editing their pictures.

That seemed to be happening in early 2023, when people who had taken pictures of the Moon on their Samsung phones – using the AI-powered Space Zoom feature – suggested that those images were not of the actual Moon at all.

In one test, a Reddit user named u/ibreakphotos took a blurry picture of the Moon and then displayed it on a screen. They then pointed their phone at the display – and found that it had filled in new details, which were not there in the original image.

“It is adding detail where there is none (in this experiment, it was intentionally removed),” the user wrote. “The reality is, it’s AI doing most of the work, not the optics, the optics aren’t capable of resolving the detail that you see. Since the Moon is tidally locked to the Earth, it’s very easy to train your model on other moon images and just slap that texture when a moon-like thing is detected.”

In a response published on its website, Samsung said that it did use artificial intelligence but maintained that the Moon in its photographs is the real lunar surface. The technology uses AI to recognise objects and then adjust the camera’s settings accordingly.

“When you’re taking a photo of the moon, your Galaxy device’s camera system will harness this deep learning-based AI technology, as well as multi-frame processing in order to further enhance details,” it said. That includes changing the brightness settings to ensure that the Moon is not blown out, for instance.
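Neither ingredient is mysterious on its own. The sketch below is a hypothetical illustration in Python – using NumPy and Pillow, not Samsung’s actual pipeline – of the two ideas in their simplest forms: detecting a bright, moon-like subject and pulling the exposure down, then averaging several frames so that noise drops away and finer detail survives. The burst file names are invented for the example.

```python
# A minimal sketch of two ideas Samsung describes: scene-aware exposure
# adjustment and multi-frame processing. Hypothetical, not Samsung's code.
import numpy as np
from PIL import Image

def looks_like_moon(frame: np.ndarray) -> bool:
    """Crude scene detection: a small, very bright blob on a dark sky."""
    gray = frame.mean(axis=2)
    bright = gray > 220                  # near-clipped pixels
    dark = gray < 30                     # night-sky pixels
    return 0.001 < bright.mean() < 0.05 and dark.mean() > 0.8

def adjust_exposure(frame: np.ndarray, ev: float) -> np.ndarray:
    """Scale brightness by 2**ev so highlights are not blown out."""
    return np.clip(frame.astype(np.float32) * (2.0 ** ev), 0, 255)

def stack_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Multi-frame processing in its simplest form: averaging aligned
    frames suppresses sensor noise, letting real detail show through."""
    return np.mean([f.astype(np.float32) for f in frames], axis=0)

frames = [np.asarray(Image.open(f"burst_{i}.jpg")) for i in range(8)]
if looks_like_moon(frames[0]):
    frames = [adjust_exposure(f, ev=-1.0) for f in frames]  # darken highlights
result = stack_frames(frames).astype(np.uint8)
Image.fromarray(result).save("moon_enhanced.jpg")
```

What Samsung’s critics objected to was a step beyond this: adding texture learned from other Moon photographs, rather than merely recovering what the sensor captured.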

Nonetheless, the episode led to more philosophical questions: what actually is a picture of the Moon, anyway? And what is the point of a photograph?

Those questions became sharper in October 2023, when Google released its Pixel 8 phone. The phone comes with a host of features, but much of its marketing has focused on pictures.

“Every moment, even better than you remember it,” one of those marketing lines on its website reads. As that tagline suggests, that means perfecting images in a way that is not necessarily true.

“Did someone blink or look away?” another part of its website asks, promoting its Best Take feature. That combines a host of images into one “fantastic picture”, as the site puts it, swapping out someone’s distracted face for a more focused one from another frame.
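Google has not published how Best Take works internally, but its core move – compositing a region from one burst frame into another – can be sketched in a few lines. The frame file names and the hand-picked face box below are illustrative assumptions; the real feature detects, aligns and blends faces automatically.

```python
# A toy version of burst compositing: paste the face region from the frame
# where the subject looked best into the otherwise-best base frame.
# Hypothetical sketch; the hard-coded box stands in for face detection.
from PIL import Image

base = Image.open("burst_base.jpg")      # best overall frame
donor = Image.open("burst_donor.jpg")    # frame where the face looks best

face_box = (420, 180, 560, 340)          # hand-picked (left, top, right, bottom)
face = donor.crop(face_box)

composite = base.copy()
composite.paste(face, face_box[:2])      # drop the better face into place
composite.save("best_take.jpg")          # a real tool would feather the seam
```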

Once again, artificial intelligence and other technologies are stepping in to help with the photo. This time, however, the AI is not hidden: Google explicitly sells its phone on its ability to create images that are better than what was really there.

That has long been the promise of some photo-sharing platforms, of course. In its early days, Instagram was best known for the filters that allowed people to adjust the look of an image; as picture-based social media took off, platforms such as Snapchat and TikTok developed those into smart filters that could adjust a person’s makeup or swap their head for someone else’s.

What has really changed, however, is the mingling of that technology with the camera itself. The photographer no longer edits the photo themselves, and might never even see the unedited version.

The editing might not always be as dramatic as on Google’s phone. Apple’s latest iPhone models have leant heavily on the phrase “computational photography”, which describes pictures produced with the help of software and computer processors as well as traditional camera hardware.

Taking a picture on an iPhone now means starting a process that involves thousands of complex, unseen calculations, Apple executives told The Independent when the iPhone 15 was launched in September 2023. iPhones inevitably have small cameras, but they have very powerful processors; recent models use the latter to offset the limitations of the former.
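One example of those unseen calculations is aligning the handheld burst frames a phone captures before it merges them. The sketch below is an illustrative stand-in using OpenCV’s phase correlation, not any phone maker’s actual method; it estimates the shake between two frames and shifts one to match the other. The file names are hypothetical.

```python
# One of the "unseen calculations" behind computational photography:
# aligning shaky burst frames before they are merged.
import cv2
import numpy as np

ref = cv2.imread("burst_0.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)
mov = cv2.imread("burst_1.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

(dx, dy), _ = cv2.phaseCorrelate(ref, mov)      # estimate the handshake shift
shift = np.float32([[1, 0, -dx], [0, 1, -dy]])  # translation undoing the shift

frame = cv2.imread("burst_1.jpg")
aligned = cv2.warpAffine(frame, shift, (frame.shape[1], frame.shape[0]))
cv2.imwrite("burst_1_aligned.jpg", aligned)
```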

Or the editing might happen in more dramatic ways after the fact. Recent updates to Photoshop have introduced “generative fill”, which lets people highlight a part of a picture and ask for it to be changed by AI: swap a tie onto the front of a man’s shirt, for instance, or turn the floor into lava. It happens in moments, and with such precision that the edit might never be noticed.
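Adobe has not opened up Generative Fill itself, but the same select-and-regenerate workflow can be reproduced with open-source diffusion models. The sketch below uses Hugging Face’s diffusers library with a publicly available Stable Diffusion inpainting model; the image, the mask file names and the prompt are illustrative, and a GPU is assumed.

```python
# Select-a-region-and-regenerate, in the style of generative fill.
# Uses the open-source diffusers library, not Adobe's actual service.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("portrait.jpg").convert("RGB").resize((512, 512))
mask = Image.open("shirt_mask.png").convert("RGB").resize((512, 512))  # white = repaint

result = pipe(
    prompt="a man wearing a red tie",  # the requested edit
    image=image,
    mask_image=mask,
).images[0]
result.save("portrait_with_tie.jpg")
```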

Taken together, it means there is nothing to indicate that any particular photograph really reflects what happened: that the Moon really looked that sharp, or that your friend wasn’t looking away in that picture. More subtly, the image might just look nicer than the scene really was; more substantially, it might include objects that were nowhere to be seen.

It is just one part of a broader question of what happens to the meaning of images when they can be generated in a moment by artificial intelligence. But it is also a question about what photographs are for: do they need to reflect the real moment, for instance, or are they intended to be a refracted recollection of it, more akin to a memory than evidence?

Hippolyte Bayard’s theatrical and misleading response to being scooped on the invention of photography didn’t put him off. In fact, he would go on to develop a new technique: combination printing, in which two separate negatives are exposed for the darker and brighter parts of a scene and then combined, allowing pictures that capture the full range of light. Today’s phones do much the same, taking a range of images and combining the details of each for the best end result.
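The digital descendant of combination printing is easy to demonstrate: OpenCV ships the Mertens exposure-fusion algorithm, which merges differently exposed frames into one image covering the full tonal range. The bracketed file names below are illustrative.

```python
# Exposure fusion: the modern, automated descendant of combination
# printing. Merges bracketed exposures into one well-exposed image.
import cv2
import numpy as np

# Hypothetical bracketed shots: underexposed (for the sky), normal,
# and overexposed (for the shadows).
exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

merge = cv2.createMergeMertens()   # Mertens et al. exposure fusion
fused = merge.process(exposures)   # float image, roughly in [0, 1]

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```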

A version of this article appeared in November 2023
