Google takes AI image generator offline over racially diverse historical images

Andrew Griffin
Thursday 22 February 2024 11:39 EST
Google has said it is working to fix its new AI-powered image generation tool, after users claimed it was creating historically inaccurate images to over-correct long-standing racial bias problems within the technology (Tim Goode/PA)


Google has taken an image generator offline after it caused controversy by generating racially diverse historical pictures.

In recent days, Google’s Gemini AI system has been criticised for seemingly being programmed to generate diverse images – even when they are inappropriate. Users found that asking for images of Nazi soldiers or the US founding fathers would create pictures consisting mostly or solely of women and people of colour.

Google said that it was taking the system offline until it had been “improved”.

“We’re already working to address recent issues with Gemini’s image generation feature,” it said in a statement. “While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.”

It appears that the system had been built to reflect the diversity of the world – and that it was doing so even when prompted for historical images in which that kind of diversity is not appropriate.

Google had earlier acknowledged the issue. In a statement, it said Gemini’s AI image generation deliberately produces a wide range of people because the tool is used by people around the world and should reflect that, but admitted it was “missing the mark here”.

“We’re working to improve these kinds of depictions immediately,” the company’s statement, posted to X, said.

“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Jack Krawczyk, senior director for Gemini experiences at Google, said in a post on X: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.

“As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously.

“We will continue to do this for open ended prompts (images of a person walking a dog are universal!).

“Historical contexts have more nuance to them and we will further tune to accommodate that.”

He added that it was part of the “alignment process” of rolling out AI technology, and thanked users for their feedback.

Some critics have labelled the tool “woke” in response to the incident, while others have suggested Google has over-corrected in an effort to avoid repeating previous incidents involving artificial intelligence, racial bias and diversity.

In the past, artificial intelligence systems made by companies including Google have seemingly reproduced racial biases present in the data provided to them. In 2015, for instance, Google Photos was criticised for tagging black people as “gorillas”.

There have been several examples in recent years involving technology and bias, including facial recognition software struggling to recognise, or mislabelling, black faces, and voice recognition services failing to understand accented English.

The incident comes as debate around the safety and influence of AI continues, with industry experts and safety groups warning AI-generated disinformation campaigns will likely be deployed to disrupt elections throughout 2024, as well as to sow division between people online.

Additional reporting by agencies
