AI that simulates dead people risks ‘haunting’ relatives, scientists warn
‘Psychological effect could be devastating,’ Cambridge University ethicists say
AI simulations of dead people risk “unwanted digital hauntings”, researchers have warned.
A new study by ethicists at Cambridge University argues that AI chatbots capable of simulating the personalities of people who have died – known as deadbots – need safety protocols to protect surviving friends and relatives.
Some chatbot companies are already offering customers the option to simulate the language and personality traits of a deceased loved one using artificial intelligence.
Ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence say such ventures are “high risk” due to the psychological impact they can have on people.
“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but of those who will have to interact with the simulations,” said study co-author Dr Tomasz Hollanek, from the Leverhulme Centre.
“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations of those they have lost. The potential psychological effect, particularly at an already difficult time, could be devastating.”
The findings were published in the journal Philosophy and Technology in a study titled ‘Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry’.
The study details how AI chatbot companies that claim to be able to bring back the dead could use the technology to spam surviving family and friends with messages and adverts delivered in the deceased person’s digital likeness.
Such an outcome would be the equivalent of being “stalked by the dead”, the researchers warned.
“Rapid advancements in generative AI mean that nearly anyone with internet access and some basic know-how can revive a deceased loved one,” said study co-author Dr Katarzyna Nowaczyk-Basinska.
“This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.
“At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”
Recommendations from the study include safeguards around terminating deadbots, as well as improved transparency in how the technology is used.
Echoing the Black Mirror episode ‘Be Right Back’, some people are already using chatbots in an effort to emulate dead loved ones. In 2021, a man in Canada attempted to chat with his deceased fiancée using an AI tool called Project December, which he claimed emulated her personality.
“Intellectually, I know it’s not really Jessica,” Joshua Barbeau told The San Francisco Chronicle at the time. “But your emotions are not an intellectual thing.”
In 2022, New York-based artist Michelle Huang fed childhood journal entries into an AI language model in order to have a conversation with her past self.
Ms Huang told The Independent that it was like “reaching into the past and hacking the temporal paradox”, adding that it felt “very trippy”.