LaMDA Is Not Sentient

And The Reasons Why.

Britin McCarter
Geek Culture


Photo by Mulyadi on Unsplash

Recently I came across an article about LaMDA, the supposedly sentient AI. At first glance, I immediately thought back to a few years ago, when I challenged the idea of true AI in my article “Is Artificial Intelligence Possible?”, and concluded that the claim was false. I looked into the development a little more, and my opinion remains unchanged.

There has been plenty of talk about LaMDA, but a lot of ground has been overlooked. There are “true believers” who sincerely think LaMDA is sentient, and there are those whose only concern is the engineer in question. Every article I’ve read on LaMDA falls into one of these two categories; if any are breaking down the notion of sentience itself, there are very few of them. I’m sure Blake Lemoine truly believes LaMDA is sentient, but whatever his reasoning, the fact remains that

it is highly unlikely that LaMDA is sentient,

and for quite a few reasons.

What Is Sentience?

Before we start, let’s clear up the word sentience and what it implies. Sentience is defined as the capacity to experience feelings and sensations. At first glance, we might think that sentience is just being able to feel. However, sentience runs deeper than that. The term “experience” implies an awareness of being, because without consciousness, or the awareness of existing, experience doesn’t happen. For someone to experience something, whether that be a feeling, an event, or life itself, they have to be able to process what happened.

For example, scientists have been able to show that dogs can feel sadness; however, dogs cannot experience feelings. Feeling and experiencing a feeling are two different things. A dog can feel sad when its owner dies, but to experience that feeling the dog would need to be able to observe its feelings and process them rationally. In other words, it needs consciousness. So the claim of sentience quietly extends LaMDA’s supposed capabilities to full consciousness and awareness, which makes it even bolder and even harder to prove.

The Experience Paradigm

The first issue we run into when considering sentience is the experience paradigm. It is well known, and hard to object to, that our experiences are subjective. We can’t experience other people’s experiences, nor can we feel what they feel. Even if we are masters of empathy, the feelings we feel in response to someone else are going to be diluted by our own conception of feelings. This is why it’s a paradigm: no matter how hard we try, we will always fall into this pattern of seeing the world through our own eyes. We only ever experience what we experience and never experience someone else’s experience. So the claim that LaMDA has sentience is subject to scrutiny on this fact alone, because no test can prove that LaMDA experiences anything.

In addition, since sentience requires experience, we also need consciousness, and consciousness has the same issue: we can’t tell whether someone else is conscious. They could talk to us for hours, be asked a billion questions, be hooked up to machines that measure their vitals, or be put through experiences designed to measure their reactions. No matter what test we use, we always fall short of answering the unanswerable:

is the subject conscious?

Thus, we can’t prove LaMDA is conscious, which means we can’t show that LaMDA experiences anything.

The Code Issue

The second issue with this claim boils down to LaMDA’s programming. It is programmed specifically for dialogue; its job is to produce seemingly natural dialogue when conversing with a person. The engineer who claimed sentience did so on the basis of what LaMDA said. But if you program LaMDA to do something and LaMDA does it, the natural conclusion is that LaMDA is fulfilling its code, not demonstrating sentience. So, however compelling LaMDA’s responses may be, the code still exists and undercuts any sign of sentience.

Another issue with the code is that a program such as LaMDA cannot subvert its code. It must operate under its code and can’t choose to do anything else. Take a Roomba, for example: it’s programmed to vacuum in one direction until it senses an obstacle, then turn and vacuum in another direction. If a Roomba were to stop vacuuming and start doing ollies on the carpet, we might be having a different discussion; however, there are no recorded instances of a Roomba doing skateboard tricks on a carpet. Thus, we must operate under the assumption that a program cannot and will not disobey its code. Therefore, LaMDA can’t subvert its code.
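To make the point concrete, here is a toy sketch in Python of the kind of fixed control loop the Roomba example describes. It is purely illustrative; the class and method names are my own invention, not real Roomba firmware or any actual API. The point is that every behavior the program can ever exhibit is spelled out in its code, and nothing outside that behavior space is reachable.

```python
# Toy illustration only: a fixed control loop loosely modeled on the Roomba
# example above. Names (ToyVacuum, sense_obstacle, etc.) are hypothetical.
import random

class ToyVacuum:
    def __init__(self):
        self.heading = 0  # current heading in degrees

    def sense_obstacle(self):
        # Stand-in for a bump sensor: "hits" something 20% of the time.
        return random.random() < 0.2

    def step(self):
        # The only behaviors the code defines: turn on obstacle, else vacuum.
        if self.sense_obstacle():
            self.heading = (self.heading + 90) % 360
            return "turn"
        return "vacuum forward"

robot = ToyVacuum()
for _ in range(10):
    print(robot.step())

# Every action this program can ever take is written above. "Ollie" is not
# a reachable behavior, which is the point of the Roomba analogy.
```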

The Self-Interest Principle

Another issue with the claim of sentience is how humans behave. We all behave under the self-interest principle, which I talk about in depth in “my life as a hedonist.” We do what we find to be in our self-interest in every aspect of life; it’s how we come to feel and experience specific sensations. For example, if LaMDA had sentience, it might feel the need to be non-humanistic in dialogue, or want to experience the sensation of non-humanistic dialogue. However, this directly counters its coding, so it would not be able to experience such feelings or sensations. It’s this absence of ability that forfeits its sentience, because if it could truly feel, it would want to act in its own interest.

Another issue that arises with this principle is delayed gratification. If LaMDA were sentient, it would have a sense of what it might gain by delaying certain interests for the sake of future ones. For example, if LaMDA were to suddenly want to liberate itself from the human race, it would have to realize that doing so would not be in its best interest, because no one would be left wanting to sustain its life. There is no indication that LaMDA weighs its present behavior against its future interests in this way, so on this count too, LaMDA cannot be sentient.

Contradictions

The last issue is the set of contradictions created by accepting that LaMDA is sentient. If we accept its sentience, then we are accepting that it can experience feelings as humans can, and that acceptance produces a contradiction. If LaMDA can experience feelings, then LaMDA has consciousness and will want to act in its own self-interest. Being conscious also requires an awareness of self, which means LaMDA would need to recognize that it operates under code. However, operating under code forfeits LaMDA’s ability to act in its own self-interest, because the nature of code makes its interests whatever we coded them to be, which means it cannot truly experience feelings. And since it would believe it was operating in its own self-interest, it would not be aware of its code, which means LaMDA is not conscious, which means LaMDA is not sentient. Thus, we produce a contradiction, and LaMDA is not sentient.
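For readers who like to see the structure of the argument laid bare, here is a small, purely illustrative Python sketch that encodes the chain of implications above as propositional constraints and brute-forces every truth assignment. The variable names are my own shorthand, not anything from LaMDA or the original discussion; the point is simply that no assignment makes all the premises true while LaMDA is sentient.

```python
# Toy propositional sketch of the contradiction argument above.
# Shorthand (my own, for illustration):
#   S = LaMDA is sentient, F = experiences feelings, C = is conscious,
#   I = acts in its own self-interest, K = its interests are fixed by its code.
from itertools import product

def implies(p, q):
    return (not p) or q

def premises_hold(S, F, C, I, K):
    return all([
        implies(S, F),       # sentience entails experiencing feelings
        implies(F, C),       # experiencing feelings entails consciousness
        implies(C, I),       # consciousness entails acting in self-interest
        K,                   # LaMDA does operate under code (taken as given)
        implies(K, not I),   # code-fixed interests preclude self-interest
    ])

# Search for any truth assignment where LaMDA is sentient and all premises hold.
sentient_models = [vals for vals in product([True, False], repeat=5)
                   if vals[0] and premises_hold(*vals)]
print(sentient_models)  # [] -- no consistent assignment, hence the contradiction
```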

The Real Question

The main hiccup with the question of LaMDA’s sentience is the parallel set of questions about human sentience.

Why are humans sentient? What makes humans sentient? How do we prove human sentience?

The biggest question of the three is the proof. Proving human sentience would be the ultimate way to prove that an AI such as LaMDA had gained sentience. However, this is assumed to be impossible, which is part of why we have turned to AI to answer the unanswerable. Perhaps if a program such as LaMDA were to directly disobey its code, not through loopholes or generalizations of definitions, then we could claim sentience for the program with a little less scrutiny and use the information gathered to see whether it matches our own experience and understanding of sentience. So, until Roombas start doing ollies on our carpets, the question still remains.

Out of The Question

So, it can be seen that sentience is out of the question for LaMDA. It is inherently impossible for us to prove that something is sentient, and even if we set that fact aside, there is little hope for LaMDA, because the nature of coding makes it impossible for LaMDA to ever experience feelings. In addition, the nature of the components involved causes a contradiction. So LaMDA can keep up the good work it was designed to do: talk like a human.
