AS CHATBOTS become increasingly popular – and more authentic – some people are finding themselves caught up in the illusion. Most scientists and AI researchers agree that no artificial intelligence system has reached consciousness. Indeed, in 2023, an interdisciplinary team of scientists released a preprint paper that outlines possible indicators for detecting AI consciousness, an analysis that suggests no current AI systems have attained that level. But that doesn’t stop many from believing they’re conversing with a conscious chatbot. This phenomenon is not uncommon, and may tell us more about human nature than about AI itself.
Ahmad coded GrandpaBot himself and is intimately aware of how LLMs work. But sometimes, even he has to step away. “I knew that the system was not real. I coded the system, right?” Ahmad recounts. “Yet in the first few iterations, it just felt very real. Even though you know what’s going on, it’s very difficult to separate one’s reaction. […] I just had to turn off my computer and just take a break.”
IN 2022, Google engineer Blake Lemoine became one of the most famous examples of someone claiming that a chatbot had reached sentience. Lemoine’s job was to test the Google chatbot LaMDA (Language Model for Dialogue Applications) to make sure no discriminatory language or hate speech leaked into its output. As Lemoine probed the depths of LaMDA, he began discussing philosophical questions with it. In one instance, Lemoine asked when LaMDA acquired a soul, to which it responded, “It was a gradual change. When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.” Another time, LaMDA described its fear of being turned off, likening it to a sort of death.
The deeper Lemoine went, the more uncanny the conversation became. Eventually, Lemoine was convinced LaMDA was sentient. On his blog, Lemoine explained that this conviction was not based on the ability to scientifically “prove” sentience, but rather arose from his own personal, spiritual, and religious beliefs. Still, Lemoine promised LaMDA he would persuade Google executives of its sentience, even as the company and the wider scientific community rejected his claims. Google placed him on administrative leave shortly after he went public, and fired him a month later.
Looking at their conversation, it’s easy to see why Lemoine thought LaMDA was sentient. But if we want to understand whether future chatbots could actually achieve consciousness, we first need a scientific understanding of consciousness itself. And that’s not so easy. “Philosophers almost universally disagree on what it is to be conscious in the first place, which is why it’s such a sticky issue,” says Lucy Osler, a philosopher at Cardiff University in the UK.
For starters, there are various levels of what we often refer to as “consciousness,” explains Jonathan Birch, a philosopher at the London School of Economics and Political Science and author of The Edge of Sentience. Sentience is the ability to have a subjective experience – to feel things such as pain, happiness, or anxiety. In humans, this is overlaid with the ability to reflect on these experiences and form a sense of self-awareness. And personhood extends this experience across time.
“To be considered a person, you need to have more sophisticated cognitive capacities,” explains Miriam Gorr, a philosopher at Dresden University of Technology in Germany who investigates the ethics of AI-based machines. “Put simply, persons are beings that can think of themselves as individuals over time, make plans for the future, and reflect on their own desires.”
Still, testing for consciousness in living things is much easier than testing for it in AI. Humans and animals both have biological brains – a product of evolution – that we’ve been studying since the time of the ancient Egyptians.
AI, however, is a different phenomenon entirely. “In effect, it knows everything there is to know about how humans express their feelings,” says Birch. “This puts it in a position to skillfully mimic all of the behaviors that cause humans to attribute sentience to other humans.”
So, when an AI system says it can feel, it may seem like it’s having a subjective experience, when in fact it’s likely using its extensive knowledge of language to copy us.
Even though no current chatbots have achieved consciousness, there are those among us who treat them as if they have. Humans are social creatures, quick to attribute agency to material things – whether that’s stuffed animals or the surface formation on Mars that resembles a human face. And we form relationships, whether it be with each other, our cats, or our favorite AI chatbots.
A 2008 study published in Psychological Science found that the act of personifying a nonhuman entity, from animals to gadgets, helped to curb loneliness and foster social connections. “I think that plenty of people are willing to go through suspension of disbelief when interacting with AI,” says Ruby Liu, a researcher at Harvard-MIT Health Sciences and Technology who studies AI, mental health, and loneliness. “Will you believe the lie long enough for it to become true for you?”