Google defends bizarre answers from ‘AI Overviews’ feature
Users report being encouraged to eat rocks and glue, among other problematic responses
Google has defended its ‘AI Overviews’ feature, after users reported receiving bizarre responses.
The tool is intended to sit alongside traditional Google results, using artificial intelligence to answer queries. The system draws on data from the internet to craft responses, and Google claims it will make it easier for people to search.
But in recent days, Google users have reported that the system has encouraged them to eat rocks and make pizzas with glue, and has repeated the false conspiracy theory that Barack Obama is Muslim.
Some of those responses appear to have been taken from online posts. The recommendation to add glue to make a pizza topping more chewy, for instance, appears to have come from a joke posted on Reddit.
Google has now said that those examples came from rare queries, and claimed that the feature is working well overall.
“The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences,” a spokesperson said. “The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web.
“We conducted extensive testing before launching this new experience to ensure AI overviews meet our high bar for quality. Where there have been violations of our policies, we’ve taken action – and we’re also using these isolated examples as we continue to refine our systems overall.”
The company said that it had added guardrails to its system with a view to stopping harmful content appearing, that it had subjected the system to an evaluation process and testing, and that AI Overviews were built to comply with its existing policies.
According to Google, it has also recently worked to improve the system’s ability to give factual answers to queries.
The problems appear to have arisen in part from the data used to inform the responses, which may include jokes or other content that becomes misleading when it is reused in an answer. But part of the issue may also be the tendency of large language models, such as those used by Google, to “hallucinate”.
Because those large language models are trained on patterns in language rather than on verified facts, they have a tendency to give answers that are worded convincingly but contain falsehoods. Some experts have suggested that such problems are inherent to these systems.