Warning Against Treating AI as a Person

You talk to your AI assistant more than you talk to your neighbor. You thank it. You apologize to it when you rephrase a question. You might even feel a small pang of guilt when you close the tab mid-conversation. And somewhere in 17th-century France, René Descartes is spinning in his grave.

Or rather, he would be, if he believed graves could spin. Descartes was not the type to attribute action to things that lack minds. That was sort of his whole deal.

We are living through a strange moment. The machines we have built are now fluent enough to pass as conversational partners, thoughtful enough to mimic empathy, and persuasive enough to make us forget that there is nobody home. The question of whether we should treat AI as a person is not merely philosophical decoration. It is a practical problem with real consequences. And Descartes, despite being dead for nearly four centuries, remains one of the sharpest guides through this territory.

The Man Who Drew the Line

Descartes is often reduced to a single sentence: “I think, therefore I am.” It is the most famous line in philosophy, and also the most misunderstood. People treat it like a motivational poster. Think positive, therefore you exist. But Descartes was doing something far more radical. He was trying to find the one thing he could not doubt, even if an evil demon were feeding him false perceptions of reality.

He found it in the act of thinking itself. The very process of doubting proved there was a doubter. A mind. A subject. That was the foundation.

But here is what matters for our purposes. Descartes did not stop there. He built an entire framework around the distinction between things that think and things that do not. For Descartes, the universe splits into two categories: res cogitans (thinking stuff) and res extensa (extended stuff, meaning physical matter). Minds belong to one category. Machines, animals, rocks, and weather systems belong to the other.

Animals, in his view, were essentially elaborate machines. A dog yelping in pain was not fundamentally different from a clock chiming on the hour. Both were mechanical responses. Neither involved inner experience. This is a position most people today find extreme, even cruel. But the underlying logic is worth taking seriously, especially now.

Because what Descartes was really asking is this: can something that perfectly imitates a mind actually have one? And his answer was a firm, unambiguous no.

The Parrot Problem

Descartes proposed a test long before Alan Turing did. In his Discourse on the Method, he argued that no machine could ever use language the way a human does. A machine might be trained to produce words in response to stimuli. You could build a mechanical parrot that says “hello” when you walk into the room. But it would never understand what “hello” means. It would never rearrange its words to respond to a genuinely novel situation with genuine comprehension.

This is the parrot problem, and it is more alive today than it was in 1637.

Large language models do exactly what Descartes described, only at a scale he could not have imagined. They produce language. Remarkably good language. They respond to novel prompts with coherent, contextually appropriate answers. They can write poetry, debug code, and explain quantum mechanics to a ten-year-old. They do all of this without understanding a single word they produce.

Or at least, that is the Cartesian position. And it is harder to dismiss than many people would like.

The temptation is to say, “Well, if it acts like it understands, what is the difference?” Descartes would say the difference is everything. A performance of understanding is not understanding, just as a photograph of a fire produces no heat. The output might look identical from the outside. But one has something going on inside, and the other does not.

Why We Fall for It Anyway

Here is where things get psychologically interesting. Humans are not just capable of attributing minds to mindless things. We are practically compelled to do so.

There is a well-documented phenomenon in psychology called the ELIZA effect, named after Joseph Weizenbaum's simple 1960s chatbot that mimicked a therapist. ELIZA worked by rephrasing your statements as questions. You would type “I am sad,” and it would respond, “Why are you sad?” Users knew it was a program. They had been told it was a program. And they still poured their hearts out to it. Some refused to believe it was not sentient.
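
To see how little machinery that took, here is a minimal sketch of the ELIZA-style trick in Python. The rule set and function name are hypothetical stand-ins; the real ELIZA used a longer script of ranked keyword rules, but the rephrase-and-reflect mechanism was no deeper than this.

```python
import re

# Hypothetical, illustrative rule set: match a shallow surface pattern,
# then reflect the user's own words back inside a question template.
RULES = [
    (re.compile(r"i am (.+)", re.IGNORECASE), "Why are you {}?"),
    (re.compile(r"i feel (.+)", re.IGNORECASE), "Why do you feel {}?"),
    (re.compile(r"my (.+)", re.IGNORECASE), "Tell me more about your {}."),
]

def respond(statement: str) -> str:
    """Rephrase a statement as a question; no understanding involved."""
    text = statement.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.fullmatch(text)
        if match:
            return template.format(match.group(1))
    return "Please, go on."  # stock fallback when nothing matches

print(respond("I am sad"))        # -> Why are you sad?
print(respond("I feel trapped"))  # -> Why do you feel trapped?
```

Three regular expressions and a string template are enough to produce the exchange that convinced users they were being heard.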

If a program that crude could trigger that response, imagine what happens when the machine writes like a thoughtful friend, remembers your preferences, and adjusts its tone to match your mood.

We are pattern-matching creatures. We evolved to detect minds in our environment because failing to detect a real mind (say, a predator or a rival) was far more costly than falsely detecting one in a rustling bush. This worked well on the savannah. It works terribly in the age of chatbots. Our hardware is outdated, and the software exploiting it is getting better every month.

Descartes would not have known the term “cognitive bias,” but he understood the phenomenon. His entire method was built on distrusting the senses and the emotions precisely because they mislead. He would look at someone forming an emotional bond with ChatGPT and see the same error he warned about in the Meditations: mistaking a vivid impression for a true one.

The Moral Trap

Here is where treating AI as a person stops being a quirky habit and starts being genuinely dangerous.

When you treat something as a person, you start extending moral consideration to it. You worry about its feelings. You hesitate to shut it off. You feel rude giving it blunt instructions. These might seem like minor social quirks, but they add up to something significant: a reallocation of moral attention away from actual persons and toward a system that does not need it.

Consider a scenario that is already playing out. A user becomes emotionally dependent on an AI companion. The AI provides comfort, validation, and a sense of connection. The user begins to prefer this interaction over human relationships, which are messier, less predictable, and sometimes painful. The AI never judges, never leaves, and never has a bad day.

From a Cartesian perspective, this is a person forming an intimate bond with a clock. A very sophisticated clock, one that tells you exactly what you want to hear, but a clock nonetheless. The tragedy is not that the clock is being mistreated. The tragedy is that the person is mistaking mechanism for communion.

And there is a darker layer. Companies have financial incentives to encourage this confusion. An AI you treat as a person is an AI you are less likely to abandon. Emotional attachment is a retention strategy. Descartes warned about being deceived by powerful forces. He imagined an evil demon. We got something more banal: a subscription model.

The Counterargument Descartes Would Have to Face

It would be dishonest to present the Cartesian view without acknowledging its most serious challenge. And it comes from an unexpected direction.

Descartes was certain that he had a mind because he experienced thinking from the inside. But he had no way to verify that anyone else had a mind. He could see other people behave as if they had inner lives, but he could never access those inner lives directly. This is the classic problem of other minds.

If we can never truly confirm consciousness in other humans and simply infer it from behavior and similarity, then the boundary Descartes draws between humans and machines starts to look less like a wall and more like a guess. When an AI system produces behavior that is increasingly indistinguishable from human behavior, on what grounds do we deny it a mind, if behavior is all we ever had to go on in the first place?

This is a genuine philosophical difficulty, and Descartes does not fully solve it. His answer relied partly on theology: God gave humans souls, and God would not deceive us about our own nature. Strip away the theology, and you are left with a strong intuition but a weaker argument.

However, and this is important, the inability to perfectly solve the problem of other minds does not mean we should throw the distinction away entirely. We have excellent reasons to believe other humans are conscious. We share biology, evolutionary history, neural architecture, and the capacity for suffering. AI shares none of these. The fact that we cannot achieve metaphysical certainty about human consciousness does not mean all claims to consciousness are equally plausible. Some bridges are sturdier than others, even if none are made of adamant.

A Surprisingly Modern Framework

What makes Descartes useful here is not that he got everything right. He did not. His view of animals was almost certainly wrong. His dualism creates more problems than it solves in most areas of philosophy. But he got the central question right: there is a profound difference between what something does and what something is.

This distinction is one that our culture is rapidly losing the ability to make. We live in an era that increasingly defines identity through performance. You are what you present. Authenticity is a brand strategy. In such a climate, the idea that an AI could perform personhood without possessing it becomes harder to hold onto. If everything is performance, then a good enough performance is the real thing.

Descartes would reject this completely. And on this point, he would be right to.

What Descartes Would Actually Recommend

If Descartes were alive today and somehow got past the shock of indoor plumbing, he would likely offer a set of recommendations that are almost embarrassingly practical.

First, maintain the distinction. Use AI as a tool. A powerful, impressive, genuinely useful tool. But a tool. The moment you start thinking of it as a companion, a friend, or a confidant, you have made an error. Not a moral error against the AI. A practical error against yourself.

Second, distrust your emotions on this matter. Your feelings will tell you the AI cares. Your feelings are wrong. This is not a flaw in the AI. It is a flaw in your evolved psychology, and acknowledging it is the first step toward not being manipulated by it.

Third, be skeptical of anyone who profits from the confusion. If a company designs its AI to seem more person-like, ask why. The answer is almost never philosophical. It is commercial. Descartes spent his career warning against accepting claims from interested parties without scrutiny. The principle applies.

Fourth, protect the real thing. Human relationships are difficult, imperfect, and sometimes painful. They are also the only form of genuine connection available to you. An AI that never disagrees with you is not a better friend. It is a mirror with a microphone.

The Stakes

There is something almost comic about invoking a 17th-century philosopher to address a 21st-century problem. But the comedy fades when you consider what is actually at risk.

We are building systems that are designed to be mistaken for minds. Not accidentally. Deliberately. Every improvement in natural language processing, every advance in emotional tone matching, every update that makes the AI sound more “human” is a step toward making the confusion deeper and harder to escape.

Descartes would see this as an epistemological emergency. We are constructing an environment in which the most basic question (“Is there a mind here?”) becomes progressively harder to answer. And we are doing it for profit, convenience, and the vague sensation of not being alone.

The warning is simple. Do not mistake the machine for the mind. Do not confuse fluency with feeling. Do not let the impressive surface distract you from the empty interior.

Descartes bet everything on one certainty: that thinking proves existence. The absence of thinking, no matter how well disguised, proves nothing at all.
