'Deathbot' values studied
People are increasingly turning to AI to have simulated discussions with the dead.
A ‘deathbot’ is a chatbot that imitates the conversational behaviour, in content, vocabulary and style, of a person who has died.
Built on generative AI systems that depend on large collections of human-generated data, deathbots draw on text messages, voice messages, emails and social media posts to mimic the speech or writing of a deceased person.
The most common form of deathbot is text-based, but deathbots that take spoken input and reply with audio output are becoming increasingly common.
They draw on ‘digital remains’ to generate responses to prompts entered by a human, responses that can resemble those the now-deceased person would have given in conversation.
In a new paper, experts explore the impact deathbots might have on the way grief is experienced and the ethical implications.
“From an optimistic perspective, deathbots can be understood as technological resources that can shape and regulate emotional experiences of grief,” says Dr Regina Fabry from Macquarie University’s Department of Philosophy.
“Researchers suggest that interactions with a deathbot might allow the bereaved to continue ‘habits of intimacy’ such as conversing, regulating emotions and spending time together.”
But, she cautions, grief experiences are complex and variable.
“How we grieve, for how long we grieve, and which resources and practices can best support us as we navigate and negotiate loss depends on a range of factors,” Dr Fabry says.
“These include the cause of death (an accident, long-term illness, or homicide, for example); the kind and quality of the relationship between the bereaved and the person who has been lost; and the wider cultural practices and norms that shape the grieving process.”
Furthermore, whether a deathbot’s impact on grief is positive or negative also depends on the bereaved person’s attitudes towards its conversational possibilities and limitations.
“Is a bereaved person aware that they are chatting with a deathbot, one that will eventually make errors? Or does a bereaved person, at least at times, feel as if they are, literally, conversing with the dead? Answering these questions requires more empirical research,” Dr Fabry says.
The paper points out that consent is a key challenge.
“Some people do not want to be ‘zombified’ in the form of a deathbot after their death. Others might express the wish during their lifetime that a deathbot be generated after their death. They might collect and curate data for that purpose,” says Dr Fabry.
“Either way, the bereaved, and the tech companies offering deathbot services, would have a moral obligation to respect the wishes of the dead.”
Some researchers have pointed out, says Dr Fabry, that the bereaved might face an autonomy problem and come to rely too heavily on a deathbot in their attempts to navigate and negotiate a world irrevocably altered by the death of a loved one.
There has been discussion, too, about whether human-deathbot interactions could see an irreversibly lost human relationship replaced by a digitally mediated relationship with an AI system, leading to self-deception or even delusion.
“To prevent the occurrence of this problem, we recommend the implementation of ‘automated guardrails’ to detect whether a bereaved person becomes overly dependent on their interactions with a deathbot,” the researcher says.
“Furthermore, we recommend that interactions with a deathbot should be supervised by a grief counsellor or therapist.”
Dr Fabry’s new paper is titled The Affective Scaffolding of Grief in the Digital Age: The Case of Deathbots.