New technology is forcing us to confront the ethics of bringing people back from the dead.
Originally published on Quartz.
Imagine you have a close friend you frequently communicate with via text. One day, they suddenly die. You reel, you cry, you attend their funeral. Then you decide to pick up your phone and send them a message, just like old times.
“I miss you,” you type. A little response bubble appears at the bottom of the screen. “I miss you too,” comes the reply. You keep texting back and forth. It’s just like they never left.
The possibility of digitally interacting with someone from beyond the grave is no longer the stuff of science fiction. The technology to create convincing digital surrogates of the dead is here, and it’s rapidly evolving, with researchers predicting its mainstream viability within a decade. But what about the ethics of bereavement—and the privacy of the deceased? Speaking with a loved one evokes a powerful emotional response. The ability to do so in the wake of their death will inevitably affect the human process of grieving in ways we’re only beginning to explore.
In the past year, neuroscientists and philosophers have been speculating about the potential of, let’s say, building a digital duplicate of your grandmother. This copy could exist in a kind of virtual Elysium, able to Skype in to Thanksgiving dinners long after her death. But Hossein Rahnama of Ryerson University and the MIT Media Lab is working on something more immediately realizable than whole-mind duplicates: chatbots crafted from personal data.
“Fifty or 60 years from now, [millennials] will have reached a point in their lives where they each will have collected zettabytes [1 trillion gigabytes] of data, which is just what is needed to create a digital version of yourself,” Rahnama says.
Dubbing it “augmented eternity,” Rahnama’s AI program builds upon the digital archive a person has left behind: emails, texts, tweets, and even Snapchats. He feeds these into artificial neural networks, which are like model brains that understand language patterns and process new information. Thanks to the neural network’s ability to “think” for itself, the person’s “digital being continues to evolve after the physical being has passed on.” In this way, an augmented-eternity bot would keep abreast of current events, develop new opinions, and become an entity based on a real person rather than a static facsimile of who they were at the time of death.
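To make the idea concrete, here is a purely illustrative toy sketch, not Rahnama’s actual system: given a person’s archive of past conversations, a bot can answer a new message by retrieving the archived reply whose original prompt best matches it. The archive contents are invented, and simple word overlap stands in for the neural networks an augmented-eternity bot would use.

```python
# Toy retrieval bot (illustration only, not Rahnama's system): answer a
# new message with the archived reply whose original prompt shares the
# most words with it. A real system would use neural networks instead.
from collections import Counter

# Hypothetical archive: (message the person received, how they replied)
ARCHIVE = [
    ("how are you", "oh you know, surviving on coffee and deadlines"),
    ("i miss you", "miss you too, come visit soon"),
    ("what are you reading", "rereading vonnegut for the third time"),
]

def _overlap(a: str, b: str) -> int:
    """Count words shared between two messages (with multiplicity)."""
    return sum((Counter(a.split()) & Counter(b.split())).values())

def reply(message: str) -> str:
    """Return the archived reply whose prompt best matches `message`."""
    best = max(ARCHIVE, key=lambda pair: _overlap(message.lower(), pair[0]))
    return best[1]
```

A call like `reply("i miss you")` returns the archived response `"miss you too, come visit soon"` — the kind of exchange the opening of this article imagines, though a lookup table like this can only echo the past, which is exactly the limitation neural networks are meant to overcome.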
Rahnama’s augmented-eternity programs are still in development, but another researcher has developed a slightly different kind of working prototype. Eugenia Kuyda, co-founder of Russian AI start-up Luka, launched a program on the company’s app last year that allows the public to engage with Roman Mazurenko, Kuyda’s best friend, who was killed in a car accident in 2015. Kuyda’s aim was to use digital-afterlife technology to create a memorial in the form of a chatbot available to anyone interested in talking to Roman. But she had her reservations.
“I was worried: Would I get the tone right, would we be able to do something that will help remember a person, and won’t be in any way offensive to anyone that knew and loved Roman?” she says. “I was afraid to get it wrong, to make it not a beautiful memory for a friend but something creepy and strange.”
In life, Roman had an interest in technology’s ability to “disrupt death.” He was fascinated by the bizarre consequences of being “outlived” by the vast archive of digital information we create before shuffling off this mortal coil. Kuyda therefore thought Roman was the perfect candidate for this experimental memorial, and went about creating the bot. Once complete, she was amazed and delighted to experience her friend’s wit once again. Romanbot expressed Roman’s insecurities, his poetic perspective, and his self-deprecating sense of humor. The bot was so convincing it even earned a seal of approval from Roman’s mother.
But while chatbots are good at imitating their progenitors’ patterns of speech, they’re not satisfying substitutes for real people. “It’s more like a shadow of a person,” Kuyda says. “At this point, it’s similar to us talking to god, or imagining we’re talking to someone we’ve lost, or even talking to a therapist.”
Fans of the sci-fi show Black Mirror may recognize a similar situation as the premise of a 2013 episode titled “Be Right Back.” In this story, a widow uses a service to collect her dead partner’s digital footprint (texts, emails, photos, audio recordings) to reconstitute him first into a chatbot able to exchange text messages with her, and then ultimately into a realistic android. The narrative suggests that attempts to preserve our loved ones in a digital afterlife will result in painful repercussions. It also raises the question of whether a service able to turn a dead person into a chatbot would be venturing into an ethical gray area, interfering with our ability to process the reality of death.
Andrea Warnick is a Toronto-based grief counselor and thanatologist who studies the scientific, psychological, and social aspects of death. She sees a potential therapeutic application for digital-afterlife technology—not necessarily in its ability to allow us to chat with lost loved ones, but by facilitating conversations about the dead within their network of bereaved friends and family.
“In modern society, many people are hesitant to talk about someone who has died for fear of upsetting those who are grieving—so perhaps the importance of continuing to share stories and advice from someone who has died is something that we humans can learn from chatbots,” she says.
Warnick says the common advice after a death is that people should “move on.” But she feels Western society could benefit from a reminder that just because someone is dead doesn’t mean they’re gone. “However, given our society’s general discomfort with death and grief, I have concerns that they [chatbots] have the potential to be misused as well, possibly leading to situations in which people are further alienated in their grieving process,” Warnick adds.
The hope is that chatbots won’t undermine the importance of human connection and support for those who are grieving; that the vital and often uncomfortable emotional labor of caring for the bereaved won’t be wholly outsourced to bots. After all, death may soon be the most apparent thing differentiating humans from advancing AI, and distancing ourselves from its stark reality doesn’t seem like a wise way to improve our relationship with the meaning of life.
Privacy is also an issue relevant to digital-afterlife programs. While Kuyda had faith that Mazurenko would give her Romanbot project his blessing, she also crafted it with far less than a zettabyte of data. That is the amount Rahnama considers necessary for a bot to be all-knowing, and such a bot would be capable of being all-revealing, too. “We have to consider an individual’s privacy when it comes to passing on virtual profiles,” Rahnama says. “You should be able to own your data and only pass it along to people you trust, so allowing people to engage with their own ancestors would be likely.”
Even as digital-afterlife technology advances to offer increasingly accurate simulacra of our dead, its most significant quality may not be simulating what someone we love might say, but rather giving us the illusion that they are listening. “It’s not about what we hear, it’s about what we say,” Kuyda says.
In this way, chatbots can provide the bereaved with a space to express thoughts and feelings about their loved ones both in private and within their communities. In time, this could help normalize conversations about death and the intensity of sorrow.
Talking to someone from beyond the grave may sound creepy. But it may offer some measure of comfort to your loved ones. It’s like the high-tech equivalent of putting together a scrapbook, or writing letters for your kids to open when you pass. Plus, it’s less frightening to think of death when you know you won’t vanish wholly into the void—but remain, in a sense, in the hearts and text conversations of the people you loved the most.