Google Duplex: Why people are so terrified by new human-sounding robot assistant
Its makers have rushed to say that the robot will identify itself to the people it speaks to
Google's "terrifying" new artificial intelligence feature has prompted concerns about a robot takeover and the abuse of AI.
Many commentators have suggested that "Duplex" is not only strange but entirely unethical, and that it could signal an important moment in the acceptance and use of artificial intelligence.
This week, Google announced a whole range of new AI features. But just one of them took almost all of the attention: Duplex.
That feature allows the assistant to call people on its owner's behalf, booking appointments or checking on the availability of restaurants. And it does so while sounding like a real human and without identifying itself, meaning that anyone speaking to it will probably think they are talking to a real person.
The outcry was immediate, and vociferous. Many said that such a feature was immoral if it did not make clear it was a robot, and that features which trick people into thinking they are real could cause problems in the future.
Many commentators pointed out that, as such, the feature had been explicitly constructed to be deceptive, and to trick the people it speaks to. Google said that the phone calls it played during the demonstration were "real" and spoke positively about how the feature was able to handle any problems that came up "gracefully".
Now, Google has said that the bot will identify itself as such. In Google's initial demo during its I/O conference, the voice sounded entirely natural and obscured the fact it was powered by AI – referring to its owner only as its client.
"We understand and value the discussion around Google Duplex – as we've said from the beginning, transparency in the technology is important," Google said in a statement. "We are designing this feature with disclosure built-in, and we'll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product."
Google's statement came after, and apparently in response to, the outcry over the feature. It has received criticism from a range of ethicists and commentators, who argue that more thought needs to be given to the ethical questions around the tool.
Beyond the deception, Google appeared to imply that it might be possible to synthesise any person's voice into a personal assistant. During the conference, for instance, it showed how it had been able to take short snippets of John Legend speaking and turn them into an entire catalogue of his voice – something that could presumably be done with the Duplex voice, too.
The feature is not yet available in any consumer product. But Google said it had been working on it for years and suggested it could be enabled soon.