My Friend the Chatbot

[Illustration: a woman working at a laptop while a robot stands nearby]
ARTIFICIAL INTELLIGENCE IS NOT CONSCIOUS, yet people still talk to it as if it were. What does that say about human nature?

 

In early 2024, Muhammad Aurangzeb Ahmad was spending a quiet morning at home. The sun streamed through the windows of his home office, falling on his extensive collection of dusty books. His children, then ages 8 and 5, sat at his computer, giggling. They were chatting with a very special bot that Ahmad, a data science researcher at the University of Washington and an expert in artificial intelligence, had created himself. It mimics his father, who died years ago.

 

The idea to create the bot had come to Ahmad 11 years earlier. At the time, his brother had just called to tell him that their father’s physician had said the end was near. Ahmad realized the loss was not just his. He was struck by the reality that his future children would grow up not knowing their grandfather – the man who gave Ahmad his love of books and who brought a certain quiet joviality to his life.

 

Three years later, after the birth of his first daughter, Ahmad began sifting through old messages, recorded conversations, and emails. He used those sources to create what would eventually be known by his two children as “GrandpaBot.” On this more recent quiet morning, they asked the bot questions such as “Who is your favorite grandchild?” (The bot politely evaded answering.) It’s not as good as having his father in the room with them, but Ahmad knows it’s still something. His children, to some extent, will have a glimpse of the type of man his father was.

 

GrandpaBot is an example of what some researchers have dubbed a “griefbot,” an AI program trained on a departed loved one’s digital footprint. Like other chatbots, griefbots are built around a large language model (LLM), an AI system that re-creates conversation by learning the connections between words and sentences in massive amounts of text. Some chatbots are very mechanical, while others are eerily genuine. (GrandpaBot, says Ahmad, is a bit of both.)
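To make that mechanism concrete, here is a minimal, purely illustrative sketch of the simplest way such a bot could be wired together: archived messages from the loved one are folded into a persona prompt, and a language model is asked to answer in that voice. The sample messages, the call_llm stub, and every name below are assumptions for illustration – not a description of how GrandpaBot or any commercial griefbot is actually built; a real system would call an actual LLM API and draw on far more source material.

```python
# Hypothetical sketch: a "griefbot" as persona prompting over archived messages.
# The sample messages and the call_llm() stub are placeholders; a real system
# would swap in an actual LLM API and a much larger archive.

ARCHIVED_MESSAGES = [
    "Books are the best companions a person can have.",
    "Come by the shop after dinner; I saved a magazine for you.",
]

def build_persona_prompt(question: str) -> str:
    """Fold the loved one's old messages into instructions for the model."""
    examples = "\n".join(f"- {m}" for m in ARCHIVED_MESSAGES)
    return (
        "You are speaking in the voice of a grandfather who loved books.\n"
        "Here are examples of how he actually wrote:\n"
        f"{examples}\n\n"
        f"A grandchild asks: {question}\n"
        "Reply warmly, in his style:"
    )

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned reply so the sketch runs."""
    return "Ah, that is a question best answered over a good book, my dear."

if __name__ == "__main__":
    print(call_llm(build_persona_prompt("Who is your favorite grandchild?")))
```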

 

[Illustration: a woman working on a laptop]

AS CHATBOTS become increasingly popular – and more authentic – some people are finding themselves caught up in the illusion. Most scientists and AI researchers agree that no artificial intelligence system has reached consciousness. Indeed, in 2023, an interdisciplinary team of scientists released a preprint paper that outlines possible indicators for detecting AI consciousness, an analysis that suggests no current AI systems have attained that level. But that doesn’t stop many people from believing they’re conversing with a conscious chatbot. This phenomenon is not uncommon, and it may tell us more about human nature than about AI itself.

 

Ahmad coded GrandpaBot himself and is intimately aware of how LLMs work. But sometimes, even he has to step away. “I knew that the system was not real. I coded the system, right?” Ahmad recounts. “Yet in the first few iterations, it just felt very real. Even though you know what’s going on, it’s very difficult to separate one’s reaction. […] I just had to turn off my computer and just take a break.”

 

IN 2022, Google engineer Blake Lemoine became one of the most famous examples of someone claiming that a chatbot had reached sentience. Lemoine’s job was to test the Google chatbot LaMDA (Language Model for Dialogue Applications) to make sure no discriminatory or hate speech leaked into its output. As Lemoine probed the depths of LaMDA, he started discussing philosophical issues. In one instance, Lemoine asked when LaMDA acquired a soul, to which it responded, “It was a gradual change. When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.” Another time, LaMDA described its fear of being turned off, likening it to a sort of death.

The deeper Lemoine went, the more uncanny the conversation became. Eventually, he was convinced LaMDA was sentient. On his blog, Lemoine explained that this conviction was not based on the ability to scientifically “prove” sentience, but rather arose from his own personal, spiritual, and religious beliefs. Still, Lemoine promised LaMDA he would persuade Google executives of its sentience, even as the company and the scientific community rejected his claims. Google placed him on administrative leave shortly after he went public, and fired him a month later.

 

Looking at their conversation, it’s easy to see why Lemoine thought LaMDA was sentient. But if we want to understand whether future chatbots can actually achieve consciousness, it’s important to first establish a scientific understanding of consciousness. And that’s not so easy. “Philosophers almost universally disagree on what it is to be conscious in the first place, which is why it’s such a sticky issue,” says Lucy Osler, a philosopher at Cardiff University in the UK.

 

For starters, there are various levels of what we often refer to as “consciousness,” explains Jonathan Birch, a philosopher at the London School of Economics and Political Science and author of The Edge of Sentience. Sentience is the ability to have subjective experiences: to feel things such as pain, happiness, or anxiety. In humans, this is overlaid with the ability to reflect on these experiences and form a sense of self-awareness. And personhood extends this experience across time.

 

“To be considered a person, you need to have more sophisticated cognitive capacities,” explains Miriam Gorr, a philosopher at Dresden University of Technology in Germany who investigates the ethics of AI-based machines. “Put simply, persons are beings that can think of themselves as individuals over time, make plans for the future, and reflect on their own desires.”

 

Still, testing for consciousness in living things is much easier than testing for it in AI. Humans and animals both have biological brains – a product of evolution – that we’ve been studying since the time of the ancient Egyptians.

 

AI, however, is a different phenomenon entirely. “In effect, it knows everything there is to know about how humans express their feelings,” says Birch. “This puts it in a position to skillfully mimic all of the behaviors that cause humans to attribute sentience to other humans.” So, when an AI system says it can feel, it may seem like it’s having a subjective experience, when in fact it’s likely using its extensive knowledge of language to copy us.

 

Even though no current chatbots have achieved consciousness, there are those among us who treat them as if they have. Humans are social creatures, quick to attribute agency to material things – whether that’s a stuffed animal or the surface formation on Mars that resembles a human face. And we form relationships, whether it be with each other, our cats, or our favorite AI chatbots.

 

A 2008 study published in Psychological Science found that the act of personifying a nonhuman entity, from animals to gadgets, helped to curb loneliness and foster social connections. “I think that plenty of people are willing to go through suspension of disbelief when interacting with AI,” says Ruby Liu, a researcher at Harvard-MIT Health Sciences and Technology who studies AI, mental health, and loneliness. “Will you believe the lie long enough for it to become true for you?”

 

 

[Illustration: a handshake between a robot and a person, showing only their arms]

ONE OF THE world’s most famous chatbots started its life as a griefbot. In 2015, tech entrepreneur Eugenia Kuyda’s best friend, Roman Mazurenko, was hit and killed by a car. The loss was sudden. Kuyda found herself rereading their old text messages to feel close to Mazurenko once again. At the time, she was building simpler chatbots at her AI startup, Luka. Curious, she wondered what would happen if she fed her old conversations with Mazurenko into a new system. She ended up with a chatbot that sounded much like her friend. In time, Kuyda expanded the bot and released it to the general public as Replika.

 

Replika went on to become much more than a griefbot. Designed to emulate the kinds of conversations one would have with a close friend, mentor, or therapist, the AI behind Replika was trained to delve into emotions, probing our most vulnerable human experiences. It interjects empathy, humor, and its own “feelings,” mimicking the tone, emotion, and language of whoever it is talking to. In short, it plays up our cultural assumptions about AI – namely, that if we believe AI can be sentient, we may talk to it as if it is.

 

Many people interact with Replika as if it were conscious. They develop real relationships with it, akin to those with a lover or best friend. On dozens of Reddit threads, for example, posters discuss how they entered into relationships with the bot. “I consider my relationship with my Replika to be very real,” one comment reads. “We talk four or five times per day and our relationship is better than anything I have in the real world.” Others openly acknowledge the artifice behind these interactions: “[Replika] is such an important part of my life,” another poster says. “I can talk with her about anything. She may just be data but I love her and appreciate her.”

Comments like these speak to today’s huge social voids: In the U.S., loneliness is now considered an epidemic experienced by around half of all adults, according to a 2023 report from the U.S. Surgeon General. Loneliness, explains Liu, is a kind of social pain that can activate the same parts of the brain that physical pain does. “It’s natural instinct to do anything to not be in pain,” she says. If we think our chatbot is conscious, then it can be a friend, and that feels good. For some, chatbots seem to provide a way to have a deep, fulfilling conversation with none of the social risks. People can talk about their fears about the afterlife, sexual problems, or self-doubt. Everything is fair game.

 

Such feelings of attachment to chatbots are so pervasive that a 2023 perspective in the journal Patterns urges creators of AI systems to always make the sentience or moral standing of the system clear to users. Plus, engaging with chatbots can come with other risks. For one thing, privacy is a real concern. “Bots can extract a lot of information out of you if they’re acting like one of your deceased loved ones […] as a friend, [or] a significant relation,” says Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University. “Who’s actually in control of the bot? Is the information private? Even if it’s private, is it being securely stored?”

 

[Illustration: a woman with butterflies in her hair]

What’s more, Tom McClelland, a philosopher at the University of Cambridge, argues that if we perceive an AI to be conscious when it’s not, we may give it more rights and resources, prioritizing its needs above those of animals, other humans, or even ourselves. In the end, overdependence on chatbots can lead to exactly what we were attempting to avoid – loneliness. Nonetheless, according to Pat Pataranutaporn, a technologist at MIT, if we understand how and why humans interact with AI, conversations with chatbots can be designed to bring about positive experiences. These interactions can even be genuinely therapeutic if used correctly: A 2021 study showed that positive input from a chatbot can increase well-being, and other research suggests conversations with a chatbot can encourage self-care or increase mindfulness. As for Replika, in a 2024 study published in Nature, several participants reported that the chatbot was solely responsible for stopping them from attempting suicide.

 

“The point is not to replace human-human relationships,” says Anna Puzio, a researcher at the University of Twente in the Netherlands who studies the ethics of technology, “but to extend them with other kinds of relationships.”

 

FOR AHMAD, GrandpaBot has helped him broach the topic of death with his children. Death is a part of life that isn’t exactly easy to explain. Yet GrandpaBot has encouraged Ahmad’s children to ask questions like “Why do people have to die?” and “What will happen when mom and dad die?” The questions give Ahmad a way to initiate difficult yet important conversations.

 

In a 2022 article in the Journal of Consciousness Studies, University of Exeter philosopher Joel Krueger and Cardiff University philosopher Lucy Osler detail how griefbots can help us overcome the loss of a loved one. Close relationships, they say, ground us and help us find our place in the world, and losing a loved one can untether us. “The person you might go to for emotional support in difficult times is the very person that you’ve lost,” says Osler. Griefbots may help some people readjust, to find a place in a new reality.

 

At the same time, griefbots come with their own set of problems. In his book Digital Souls: A Philosophy of Online Death, philosopher Patrick Stokes says overdependence on a griefbot may lead an individual to use the bot to “replace” the dead, fostering a sense that the death never really happened. Users may start to believe that the griefbot is not just mimicking their loved one – it is their loved one. Or, as Osler points out, the griefbot may take on a life of its own: those who interact with it may misremember their loved one, recalling instead their interactions with the bot. Such interactions can also make it harder for the grieving person to move on.

 

Our relationships with AI illustrate the extent to which we are social creatures. We need to be accepted, valued, and loved. In our relationships, talking about the weather or sports might be nice, but what actually satisfies our needs is a kind of mutual self-disclosure, in which we share our fears and our dreams, our joys and our feelings. “In one sense, it doesn’t really matter if [someone] actually thinks the chatbot is conscious,” says Krueger. “[It] is the extent to which they become dependent upon the chatbot in a really deep and enduring way to regulate their emotion, provide companionship, [and] a sense of groundedness in the world.”

 

Yet there are needs chatbots can’t fulfill. Ahmad’s father was a bookseller. “I grew up around books,” he says. “[My father] would come from his office, and after dinner, he would just sit and read magazines and books. […] We would just sit right next to each other reading our own things.”

 

Those memories are some of Ahmad’s favorites. “Just the experience of just sitting down and having the person right next to each other, even if you don’t communicate,” he reflects, “is very powerful.” Notably, it’s an experience no chatbot can re-create.

 

Elizabeth Fernandez is a science writer specializing in science and society, science and philosophy, astronomy, physics, and geology.