
Cambridge experts warn: AI ‘Deadbots’ can digitally ‘haunt’ loved ones from beyond the grave

Cambridge researchers have warned of the psychological dangers of ‘deadbots’, AI chatbots that mimic deceased individuals, urging ethical standards and consent protocols to prevent abuse and ensure respectful interaction.

Artificial intelligence that allows users to hold text and voice conversations with lost loved ones risks causing psychological harm and even digitally “haunting” those left behind unless design safety standards are put in place, according to Cambridge University researchers.

“Deadbots” or “Griefbots” are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind. Some companies already offer these services, providing an entirely new type of “presence after death.”

Artificial intelligence ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence outline three design scenarios for platforms that could emerge as part of the growing “digital afterlife industry”, to show the potential consequences of careless design in an area of artificial intelligence they describe as “high risk”.

Abuse of AI Chatbots

The study, published in Philosophy & Technology, highlights the potential for companies to use deadbots to stealthily advertise products to consumers in the manner of a departed loved one, or to distress children by insisting that a deceased parent is still “with you”.

When the living sign up to be virtually recreated after they die, the resulting chatbots can be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally “stalked by the dead”.

Even those who are initially comforted by a deadbot may be exhausted by everyday interactions that become a “tremendous emotional burden”, the researchers say, yet they may also be powerless to have an AI simulation suspended if their now-deceased loved one signed a long-term contract with a digital afterlife service.


A visualization of a fictional company called MaNana, one of the design scenarios used in the paper to illustrate potential ethical issues in the emerging digital afterlife industry. Credit: Dr. Tomasz Hollanek

“Rapid advances in generative AI mean that almost anyone with internet access and some basic know-how can bring a deceased loved one back to life,” said Dr Katarzyna Nowaczyk-Basińska, co-author of the study and a researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI). “This area of AI is an ethical minefield. It is important to prioritize the dignity of the deceased and ensure that it is not encroached on by the financial motives of digital afterlife services, for example. At the same time, a person may leave an AI simulation as a parting gift for loved ones who are not ready to process their grief in this way. The rights of both data donors and those who interact with AI afterlife services must be equally protected.”

Existing services and hypothetical scenarios

There are already platforms offering AI-powered re-creation of the dead for a small fee, such as Project December, which started out using GPT models before developing its own systems, and apps such as HereAfter. Similar services have begun to appear in China as well. One of the potential scenarios in the new paper is “MaNana”: a conversational AI service that allows people to create a deadbot simulating their deceased grandmother without the consent of the “data donor” (the dead grandparent).

The hypothetical scenario sees an adult grandson, who is initially impressed and comforted by the technology, start receiving ads once the “premium trial” ends. For example, the chatbot suggests ordering from food delivery services in the voice and style of the deceased. The grandson ultimately feels he has disrespected his grandmother’s memory and wants the deadbot switched off, but in a meaningful way – something the service providers have failed to consider.


A preview of a fictional company called Paren’t. Credit: Dr. Tomasz Hollanek

“People can develop strong emotional attachments to such simulations, which will make them particularly vulnerable to manipulation,” said co-author Dr Tomasz Hollanek, also of Cambridge’s LCFI. “Methods and even rituals for retiring deadbots in a dignified way should be considered. This could mean a form of digital burial, for example, or other types of ceremony, depending on the social context. We recommend design protocols that prevent deadbots from being used in disrespectful ways, such as for advertising or maintaining an active social media presence.”

While Hollanek and Nowaczyk-Basińska say that designers of re-creation services should actively seek consent from data donors before they die, they argue that banning deadbots based on non-consenting donors would be impractical.

They suggest that design processes should include a series of prompts for those who wish to ‘resurrect’ their loved ones, such as ‘have you ever spoken to X about how they would like to be remembered?’, so that the dignity of the deceased is kept at the forefront of deadbot development.

Age restrictions and transparency

Another scenario presented in the article, an imaginary company called “Paren’t,” highlights the example of a terminally ill woman leaving a deadbot to help her eight-year-old son through the grieving process.

While the deadbot initially helps as a therapeutic aid, the AI begins to generate confusing responses as it adapts to the child’s needs, such as suggesting an impending in-person meeting.


A preview of a fictional company called Stay. Credit: Dr. Tomasz Hollanek

The researchers recommend age restrictions for deadbots and also call for “meaningful transparency” to ensure users are always aware they are interacting with AI. These could be similar to the current warnings about content that may cause seizures, for example.

The final scenario explored by the study – a fictional company called Stay – shows an elderly person who secretly commits to a deadbot of themselves and pays for a twenty-year subscription, in the hope that it will comfort their grown-up children and allow their grandchildren to know them.

After the parent’s death, the service begins. One adult child does not engage and receives a barrage of emails in the voice of their dead parent. Another does engage, but ends up emotionally drained and guilt-ridden over the fate of the deadbot. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.

“It is vital that digital afterlife services consider the rights and consent of not only those they recreate, but also those who will have to interact with the simulations,” Hollanek said.

“These services risk causing enormous distress to people if they are subjected to unwanted digital stalking by disturbingly accurate AI recreations of those they have lost. The potential psychological effect, especially at an already difficult time, could be devastating.”

The researchers urge design teams to prioritize opt-out protocols that allow potential users to end their relationships with deadbots in ways that provide emotional closure.

Nowaczyk-Basińska added: “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.”

Reference: “Griefbots, Deadbots, Postmortem Avatars: On Responsible Applications of Generative AI in the Digital Afterlife Industry” by Tomasz Hollanek and Katarzyna Nowaczyk-Basińska, 9 May 2024, Philosophy & Technology.
DOI: 10.1007/s13347-024-00744-w
