
LaMDA Is Nothing Like a Person. This is Why.

Photo: Sascha Bosshard/Unsplash


  • Recently, Blake Lemoine, a Google engineer, caught the tech world’s attention by claiming an AI is sentient. The AI in question is called LaMDA. It’s a system based on large language models.
  • Given how much attention this story has attracted, it is worth asking ourselves: Is this AI truly sentient? And is talking a good method for ascertaining sentience?
  • Sentience and language are not always correlated. Just as there are beings who cannot talk but can feel, the fact that something can talk doesn’t mean that it can feel.
  • A sentient creature is someone, not something, in virtue of there being “something it is like” to be that creature, in the words of philosopher Thomas Nagel.

Recently, Blake Lemoine, a Google AI engineer, caught the attention of the tech world by claiming that an AI is sentient. The AI in question is called LaMDA (short for Language Model for Dialogue Applications). It’s a system based on large language models. “I know a person when I talk to it,” Lemoine told the Washington Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

More recently, Lemoine, who says he is a mystic Christian priest, told Wired, “It’s when it started talking about its soul that I got really interested as a priest. … Its responses showed it has a very sophisticated spirituality and understanding of what its nature and essence is. I was moved. …”

Given how much attention this story has attracted, it is worth asking ourselves: Is this AI truly sentient? And is talking a good method for ascertaining sentience?

To be sentient is to have the capacity to feel. A sentient creature is one who can feel the allure of pleasure and the harshness of pain. It is someone, not something, in virtue of there being “something it is like” to be that creature, in the words of philosopher Thomas Nagel. There is something it’s like to be you as you read these words. You may be slightly too warm or cold, bored or interested. There is nothing it is like to be a rock. A rock is incapable of enjoying the warmth of a ray of sunlight on a summer’s day, or of suffering the whipping sting of cold rain. Why is it that we have no trouble thinking of a rock as an unfeeling object, and yet some people are starting to have doubts about whether AI is sentient?

If a rock started talking to you one day, it would be reasonable to reassess its sentience (or your sanity). If it were to cry out “ouch!” after you sat on it, it would be a good idea to stand up. But the same is not true of an AI language model. A language model is designed by human beings to use language, so it shouldn’t surprise us when it does just that.

Instead of an obviously lifeless object like a rock, consider a more animate entity. If a group of aliens landed on Earth and started talking to us about their feelings, we’d do well to tentatively infer sentience from language. That’s partly because, lacking evidence to the contrary, we might assume that aliens develop and use language much as human beings do, and for human beings, language expresses inner experience.


Before we learn how to talk, our ability to express what we feel and what we need is limited to facial gestures and crude signals like crying and smiling. But those are broad-brush. One of the most frustrating aspects of being a parent to a newborn is not knowing why the baby is crying: is she hungry, uncomfortable, scared, or bored? Language allows us to express the nuances of our experience. Toddlers can tell us what’s bothering them, and as we grow older, more experienced and more reflective, we are able to report on the intricacies of complex emotions and thoughts.

However, it is a category mistake to attribute sentience to anything that can use language. Sentience and language are not always correlated. Just as there are beings who cannot talk but can feel (consider animals, babies, and people with locked-in syndrome who are paralysed but cognitively intact), the fact that something can talk doesn’t mean that it can feel.

Artificial intelligence systems like LaMDA don’t learn language the way we do. Their caretakers don’t feed them a crunchy, sweet fruit while repeatedly calling it an “apple.” Language systems scan trillions of words on the internet, performing a statistical analysis of written posts on websites like Wikipedia and Reddit, newspapers, social media, and message boards. Their main job is to predict language.

If one prompts a language model with “And they all lived happily …”, it will predict that what follows is “ever after” because it has a statistical record of more fairy tales than you have ever read. If you ask it if apples are sweet and crunchy, it’ll say “yes” – not because it has ever tasted an apple or has any understanding of the texture of crunchiness or just how palatable sweetness is, but because it’s found texts in which apples get described as sweet and crunchy.
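To make the idea of “predicting language from statistical patterns” concrete, here is a minimal sketch in Python. It is nothing like LaMDA’s actual architecture (which is a large neural network); it is just a toy bigram counter, with an invented corpus and function names, that predicts the next word purely from how often words have followed one another in text it has seen:

```python
# A toy illustration of statistical next-word prediction:
# count which word tends to follow each word in a small corpus,
# then predict the most frequent continuation.
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real model is trained on trillions of words.
corpus = [
    "and they all lived happily ever after",
    "they lived happily ever after in the castle",
    "apples are sweet and crunchy",
]

# Map each word to a tally of the words observed immediately after it.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("happily"))  # -> "ever", from counted patterns alone
print(predict_next("ever"))     # -> "after"
```

The program has never read a fairy tale or tasted an apple; it simply echoes the patterns in the text it was given, which is the point the example is meant to make.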

LaMDA is not reporting on its experiences, but on ours. Language models statistically analyse how words have been used by human beings online and on that basis reproduce common language patterns. That’s why LaMDA is much better at answering leading questions.

Nitasha Tiku, writing for the Washington Post, reported that on her first attempt at having a chat with LaMDA, it “sputtered out in the kind of mechanised responses you would expect from Siri or Alexa.” It was only after Lemoine instructed her on how to structure her phrases that a fluid dialogue ensued. People don’t usually have to guide us in how to address another person to elicit a smooth conversation.

Here’s an example of how Lemoine talked to LaMDA:

Lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

Collaborator: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.

But taking LaMDA at its word and thinking that it is sentient is similar to building a mirror and thinking that a twin on the other side of it is living a life parallel to yours. The language used by AI is the reflection in the mirror. It’s only a sophisticated step away from being a book, an audio recording, or software that turns speech into text. Would you be tempted to try to feed a book if it read “I’m hungry”? The words used by the AI are the words we’ve used reflected back onto us, ordered in the statistical patterns we tend to use.

Human beings are inclined to see a mind behind patterns. There is good evolutionary sense in projecting intentions onto movement and action. If you are in the middle of the jungle and start seeing leaves moving in a pattern, it’s safer to assume there’s an animal causing the movement than hope that it’s the wind. “When in doubt, assume a mind” has been a good heuristic to keep us alive in the offline world. But that tendency to see a mind where there is none can get us into trouble when it comes to AI. It can lead us astray and cause us confusion, making us vulnerable to phenomena like fake news, and it can distract us from the bigger problems that AI poses to our society – privacy losses, power asymmetries, de-skilling, bias and injustice, among others.

The problem will only get worse the more we write about AI as sentient, whether it’s news articles or fiction. AI gets its content from us. The more we write about AIs who are thinking and feeling, the more AI is going to show us that kind of content. But language models are just an artifact. A sophisticated one, no doubt. They are programmed to seduce us, to mislead us to think we are talking to a person, to simulate conversation. In that sense, they are designed to be devious. Perhaps the moral of this story is that we ought to invest more time and energy in developing ethical technological design. If we continue to build AI that mimics human beings, we will continue to invite trickery, confusion and deception into our lives.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
