‘Deadbots’ could be the future of advertising, ethicists warn, and we’re not ready

AI chatbots are proliferating around the world, and as conversations with a wide variety of bots become possible, several companies now offer consumers the chance to chat with a “simulation” of a deceased loved one for as little as $10.

Some who have already tried the technology find comfort in the text, voice, or video simulations, saying it feels as though their loved ones really are speaking to them from beyond the grave. Others find the AI immortalization of the deceased disturbing and manipulative.

Ethicists Tomasz Hollanek and Katarzyna Nowaczyk-Basińska of the University of Cambridge are the latest to raise concerns about the risks of the “digital afterlife industry”.

They argue that chatbots that mimic deceased people—sometimes called deadbots, griefbots, or ghostbots—raise several key social and ethical questions that we have yet to confront.

For example, who owns a person’s data after they die? What is the psychological effect on survivors? What might a deadbot be used for? And who has the right to shut a bot down for good?

Such questions once inspired an eerie episode of the sci-fi series Black Mirror. Now that imagined future seems increasingly possible.

Consider the risks of the following hypothetical scenario, which Hollanek and Nowaczyk-Basińska present in their recent research paper. A 28-year-old woman’s grandmother is dying, so she uploads their text message and voice memo exchanges to an app that lets her summon an AI simulation of her deceased grandmother whenever she wants. After a free trial ends, her digital grandma starts pitching products to her mid-conversation.

“People can develop strong emotional attachments to such simulations, which will make them particularly vulnerable to manipulation,” Hollanek suggests.

“Methods and even rituals for retiring deadbots in a dignified way should be considered. This could mean a form of digital burial…”

Such treatment of an AI chatbot may sound absurd at first, but as early as 2018 some ethicists argued that a person’s digital remains are valuable and should be treated as more than a means of profit: as an “entity possessing intrinsic value.”

This logic is consistent with the International Council of Museums’ code of professional ethics, which mandates that human remains be handled with due respect and with human dignity kept “intact”.

Hollanek and Nowaczyk-Basińska don’t think a complete ban on deadbots is feasible, but they say companies should treat the data of the deceased “with reverence.”

They also echo earlier arguments that deadbots should never appear in public digital spaces such as social media, with a possible exception for historical figures.

In 2022, ethicist Nora Freya Lindemann argued that deadbots should be classified as medical devices to ensure that users’ mental health is a key priority of the technology. Young children, for example, may struggle to understand the physical loss of a loved one who remains digitally “alive” and part of their daily lives.

But Hollanek and Nowaczyk-Basińska argue that this idea is “too narrow and too restrictive as it specifically applies to deadbots designed to help service-interactors process grief.”

Instead, they argue, these systems should be “meaningfully transparent” so that users know exactly what they are signing up for and what the risks might be.

There is also the question of who has the power to disable a bot. If a person leaves a “ghostbot” of themselves to their children, do the children have the right to opt out? Or does the deadbot persist forever if that is what the deceased wished? The wishes of the parties involved will not always align, so whose should prevail?

“Further guardrails are needed to guide the development of re-creation services,” Hollanek and Nowaczyk-Basińska conclude.

The Cambridge duo hope their arguments “will help center critical thinking about the ‘immortality’ of users in the design of human-AI interaction and in the study of AI ethics”.

The research paper was published in Philosophy & Technology.