Google is not releasing its AI bot competitor to ChatGPT because of ‘risk’
Search giant has faced questions since rival chatty AI became vastly popular
Google is not releasing its own chatty AI bot because of fears about reputational risk, it has said.
The company is worried that the system will give answers that sound legitimate but could be significantly wrong, it said, according to a report of a meeting from CNBC.
In recent days, the ChatGPT system created by OpenAI has proven hugely popular. Its ability to create everything from fake TV scripts to programming code has become the basis for viral tweets and fears about the future of many industries.
The popularity of the system has led many to wonder whether Google would make its own system public, and whether it had missed an opportunity by not doing so. That same question was asked during an all-hands meeting at the company this week, CNBC reported.
But Alphabet chief executive Sundar Pichai and Google’s head of AI Jeff Dean said that the company had to move “conservatively” because of its size and the “reputational risk” that such an app could pose.
Google’s system is called LaMDA, which stands for Language Model for Dialogue Applications. It provoked a minor scandal earlier this year when a Google engineer claimed that it had become “sentient”, an assertion dismissed by most experts.
Google says that the technology built as part of the development of LaMDA is already used in its search offering. The system can spot when people may need personal help, for instance, and will direct them to organisations that can offer it.
But it will be staying primarily in those contexts for now, Google reportedly said, until it can be relied on more confidently.
“We are absolutely looking to get these things out into real products and into things that are more prominently featuring the language model rather than under the covers, which is where we’ve been using them to date,” Mr Dean said. “But, it’s super important we get this right.”
The problems of such AI systems have been repeatedly detailed. They include fears about bias, especially when the system is trained on limited data, and the difficulty of knowing whether a given answer is actually correct.
Many of those issues have already been seen in ChatGPT, despite the advanced technologies underpinning it. The system will often confidently and convincingly reply to questions with answers that are wildly wrong, for instance.