There is a surgeon in an operating room. She nicks an artery. The patient bleeds out on the table. There will be lawsuits. There will be investigations. Her career might end. Her name will appear in newspapers. Strangers on the internet will call her a murderer.
Now imagine a robotic surgical system does the same thing. Same artery. Same patient. Same outcome. There will be a recall notice. Maybe a software patch. The manufacturer will release a statement expressing deep concern and commitment to patient safety. Within six months, the same model will be operating in hospitals again, possibly with a new version number.
Nobody will call the robot a murderer. Nobody will demand it feel remorse. And nobody will find this strange, because we have built an entire psychological infrastructure around the idea that machines deserve a kind of grace we would never extend to each other.
Dan Ariely, the behavioral economist who has spent decades studying the spectacularly irrational ways humans make decisions, would not be surprised by any of this. His work reveals something uncomfortable about our species. We do not judge actions. We judge actors. And when the actor is not human, the entire moral calculus changes in ways that should alarm us but mostly do not.
The Expectation Gap
To understand why we forgive machines so readily, you have to understand what we expect from them in the first place. And what we expect, paradoxically, is both perfection and nothing.
We expect perfection in the sense that we assume technology is advancing toward flawlessness. Every update should be better. Every new model should fail less. We hold a vague, almost religious faith that progress moves in one direction. But we also expect nothing from machines in a moral sense. We do not expect them to care. We do not expect them to try. We do not expect them to understand what they have done when they fail.
This creates an odd emotional loophole. When a human doctor makes an error, we feel betrayed. She was supposed to care. She took an oath. She looked us in the eye and said she would do her best. When she fails, it feels personal because it was personal. There was a person involved.
When a machine fails, there is no betrayal because there was no relationship. You cannot feel betrayed by your toaster. You can feel annoyed, frustrated, even enraged. But betrayal requires a broken promise, and promises require consciousness. Since we do not believe machines are conscious, we do not hold them to promises they never made.
Ariely was writing about the predictability of human irrationality, but the same framework applies beautifully here. We are not being rational when we forgive robots more easily. We are being predictably irrational. We are running a cognitive program that sorts the world into agents who should know better and objects that cannot.
The Blame Vacuum
Here is where things get genuinely strange. When a human makes a mistake, blame has a clear address. It lands on a person with a face, a name, a medical license, a social media profile you can find in thirty seconds. Accountability is simple because it is personal.
When a machine makes a mistake, blame enters a kind of vacuum. Who is responsible? The engineer who wrote the code? The product manager who approved the release? The CEO who set the timeline? The regulator who certified the device? The user who did not read the manual? Responsibility diffuses across so many people that it effectively lands on nobody. And when blame lands on nobody, anger has nowhere to go. It dissipates. It fades. It becomes a policy discussion instead of a moral crisis.
This is not an accident. It is a feature of how we have organized the relationship between humans and technology. The corporate structure that produces machines is, almost by design, a blame diffusion engine. No single person is ever fully responsible, which means no single person ever faces the kind of fury we direct at an individual who fails.
Think about self-driving cars. When a Tesla on Autopilot kills a pedestrian, the public reaction is intense but strangely abstract. People debate regulation, software updates, sensor technology, the ethics of autonomous vehicles. What they do not do, generally, is demand that a specific engineer be imprisoned. Compare this to a drunk driver who kills a pedestrian. The public wants blood. They want a name, a mugshot, a sentencing hearing. The outcome is identical. A person is dead. But the emotional response is worlds apart because in one case there is a human villain and in the other there is a system.
The Narrative Problem
Ariely has argued that humans are fundamentally storytelling creatures. We do not process events as raw data. We process them as narratives. And narratives require characters, motivations, and moral arcs. When a human fails, the story writes itself. Hubris. Negligence. Greed. Incompetence. We know these plots. We have been telling them since the Greeks.
When a machine fails, the narrative collapses. There is no hubris in a software bug. There is no greed in a miscalibrated sensor. There is no moral arc in a system crash. Without a story to tell, we lose emotional traction. The incident becomes a technical problem rather than a human drama, and technical problems do not sustain outrage the way human dramas do.
This is actually a profound insight into how accountability works in modern society. We do not hold things accountable based on outcomes. We hold them accountable based on narratives. If we can tell a satisfying story about why something went wrong and who is to blame, we pursue justice aggressively. If we cannot tell that story, we shrug and move on. The quality of the narrative matters more than the magnitude of the harm.
Consider how this plays out with artificial intelligence. When a chatbot gives someone terrible medical advice and they follow it, the reaction is often a mix of sympathy for the user and mild criticism of the company. But when a human doctor gives the same terrible advice, the reaction is rage. Not because the advice was different, but because the story is different. The doctor should have known. The doctor had training. The doctor was arrogant or lazy or distracted. The chatbot was just doing what chatbots do, which is generate plausible sounding text without understanding any of it.
The Uncanny Valley of Accountability
There is a fascinating wrinkle here that Ariely’s work helps illuminate. As machines become more humanlike, our willingness to forgive them actually decreases. This mirrors the uncanny valley in robotics, where we find almost-human faces more disturbing than clearly nonhuman ones. There appears to be an uncanny valley of accountability too.
A simple calculator that gives you a wrong answer gets zero blame. A navigation app that sends you the wrong way gets mild annoyance. A virtual assistant that misunderstands your request and books the wrong flight gets more frustration. And an AI system that looks you in the eye through a screen, speaks in a natural voice, and gives you advice that ruins your finances? That one gets something approaching the anger we would direct at a human.
The more a machine mimics personhood, the more we apply personhood standards to it. This means the forgiveness gap is not permanent. It is a function of perceived humanity. And as AI systems become more convincing in their imitation of human characteristics, we will likely start holding them to increasingly human standards.
This creates a strange incentive for companies. If you want your product to be forgiven easily, make it look less human. Keep it mechanical. Keep it obviously artificial. The moment you give it a friendly name, a conversational tone, and a simulated personality, you are importing human expectations. And with human expectations comes human judgment.
The Competence Paradox
Here is perhaps the most counterintuitive element of all. We forgive robots for mistakes partly because we believe they are more competent than humans. This sounds backwards, but it makes psychological sense.
If you believe a system is highly competent, then a failure seems like an anomaly. A glitch. A one time event that will be corrected. You do not lose faith in the entire system because of a single failure, just as you do not stop flying because of a single plane crash. The overall track record is too strong.
With humans, competence is always uncertain. You never truly know how good your doctor is, how attentive your pilot is, how careful your pharmacist is. Every interaction carries a background hum of doubt. So when a human fails, it does not feel like an anomaly. It feels like a confirmation. Ah, so they were not as good as I hoped. The failure validates a suspicion you were already carrying.
This is related to what Ariely describes in his work on trust. We extend a specific kind of trust to systems that is different from the trust we extend to individuals. System trust is statistical. It is based on aggregate performance. Individual trust is personal. It is based on character assessment. And character can be destroyed by a single act in a way that statistical performance cannot.
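To make that asymmetry concrete, here is a toy sketch in Python. It is not Ariely's model, just a standard Beta-Bernoulli belief update with invented track records, but it shows how the same single failure barely dents statistical trust while gutting personal trust.

# A toy Beta-Bernoulli sketch of the trust asymmetry.
# The track records below are invented for illustration;
# nothing here comes from Ariely's data.

def trust_after_failure(successes, failures):
    # Posterior mean of a Beta(successes, failures) belief
    # after observing one additional failure.
    return successes / (successes + failures + 1)

# System trust: a long, strong track record (999 successes, 1 failure).
before_system = 999 / 1000                    # 0.999
after_system = trust_after_failure(999, 1)    # ~0.998, barely moves

# Personal trust: a thin track record (3 successes, 1 failure).
before_person = 3 / 4                         # 0.750
after_person = trust_after_failure(3, 1)      # 0.600, a large drop

print(f"system: {before_system:.3f} -> {after_system:.3f}")
print(f"person: {before_person:.3f} -> {after_person:.3f}")

Under these made-up numbers, one failure costs the system about a tenth of a percentage point of trust and costs the person fifteen full points, a gap of roughly two orders of magnitude. A long aggregate record absorbs shocks that a thin personal one cannot, which is exactly why a single act can destroy a character assessment but not a statistic.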
The Evolutionary Mismatch
There is one more layer worth examining, and it connects this entire discussion to something much older than robots or behavioral economics.
Human beings evolved to be exquisitely sensitive to threats from other humans. For most of our evolutionary history, the primary dangers we faced came from other people. Rivals. Enemies. Betrayers. Cheaters. Our brains developed sophisticated systems for detecting, remembering, and punishing human wrongdoing because doing so was a survival advantage. You needed to know who was dangerous, who was untrustworthy, who might steal your food or your mate.
We have no equivalent system for evaluating threats from tools. For hundreds of thousands of years, a tool was a rock or a stick. It did not have agency. It could not betray you. If it broke, you found another one. This deep evolutionary programming is still running. When a human wrongs us, every alarm in our ancient brain lights up. When a machine wrongs us, those alarms stay mostly silent because our hardware was never designed to process that kind of threat.
This is a mismatch that will only become more significant as machines become more powerful. We are using Stone Age threat detection software to navigate a world where the most consequential decisions are increasingly made by systems that do not trigger our threat response. The danger is not that we will be too hard on machines. The danger is that we will be too soft.
What This Means for the Future
The forgiveness gap between humans and machines is not just a psychological curiosity. It has real consequences for how we build, regulate, and deploy technology.
If we consistently hold machines to lower standards than humans, we create an incentive to replace humans with machines not because the machines are better, but because the machines are more easily forgiven when they fail.
Ariely would likely point out that this is another example of how our irrationality compounds. Each individual decision to forgive a machine more readily than a human seems reasonable in isolation. Of course we do not blame a machine the way we blame a person. Machines do not have intentions. But collectively, these decisions build a world where accountability gradually migrates from the visible to the invisible, from the personal to the institutional, from the courtroom to the terms of service agreement that nobody reads.
The solution is not to start screaming at our computers. The solution is to recognize that our instinct to forgive machines is not a sign of rationality. It is a cognitive bias, as predictable and as exploitable as any of the others Ariely has catalogued.
The next time a piece of technology fails you and you shrug it off, ask yourself a question. If a person had done the same thing, would you shrug? If the answer is no, it is worth asking why. Not because the machine deserves your anger. But because someone, somewhere, built that machine, deployed it, and profited from it. And that someone is a person.
And people, as we have established, are held to a different standard entirely.