The Rothbardian case against the protective state
You lock your front door at night. You check reviews before buying a car seat for your kid. You read the ingredients on food labels. You do these things because you care about safety. Nobody had to pass a law to make you care.
And yet, somewhere along the way, we decided that caring wasn’t enough. We decided we needed a vast apparatus of government agencies, licensing boards, and regulatory commissions to keep us safe from ourselves. The assumption is so deeply baked into modern life that questioning it feels almost reckless. Of course we need the FDA. Of course we need building codes. Of course we need occupational licensing. Without them, chaos. Without them, danger. Without them, death.
Murray Rothbard thought this was nonsense. Not because he didn’t care about safety, but because he believed the entire framework was built on a lie. The lie isn’t that safety matters. The lie is that governments are good at providing it and that the alternatives are worse.
What if the very institutions designed to protect you are the ones quietly putting you at risk?
The seen and the unseen
There’s an old idea in economics, originally articulated by the French economist Frédéric Bastiat, that Rothbard leaned on heavily. It goes like this: every policy has effects you can see and effects you cannot see. The visible effects get all the attention. The invisible effects do all the damage.
When the FDA approves a drug, that’s visible. Press releases go out. Patients celebrate. The system works, we tell ourselves. But when the FDA delays a drug for years, the people who die waiting are invisible. They don’t show up at press conferences. They don’t file complaints. They just quietly cease to exist, and nobody connects their deaths to a bureaucratic timeline.
Rothbard understood this asymmetry better than most. He argued that regulators face a fundamentally lopsided incentive structure. If a regulator approves something that turns out to be harmful, the consequences are immediate and public. Hearings. Headlines. Careers destroyed. But if a regulator delays or blocks something that could have saved lives, nothing happens. No journalist writes that story. No congressional committee investigates the absence of a product.
So what does any rational bureaucrat do? They delay. They demand more studies. They add more requirements. They err so far on the side of caution that caution itself becomes the danger.
The economist Sam Peltzman documented this effect with automobile safety regulation. After federal mandates in the 1960s required cars to be built with seatbelts and other safety equipment, drivers did indeed survive crashes at higher rates. That’s the seen effect. But drivers also drove more recklessly, because they felt safer. Pedestrian and cyclist deaths went up. By Peltzman’s estimate, total fatalities barely changed. The risk didn’t disappear. It just got redistributed to people who had no say in the matter.
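The arithmetic behind this redistribution can be sketched with a toy model. The numbers below are purely illustrative, not Peltzman’s data: they only show how halving occupant lethality can be offset when emboldened drivers crash more often and strike more pedestrians.

```python
# Toy model of the Peltzman effect. All numbers are hypothetical,
# chosen only to show how risk can be redistributed rather than removed.

def total_fatalities(crashes, occupant_death_rate, pedestrian_deaths):
    """Expected deaths per year: occupants killed in crashes plus pedestrians."""
    return crashes * occupant_death_rate + pedestrian_deaths

# Before the mandate: 1,000 crashes a year, a 4% occupant fatality
# rate per crash, and 10 pedestrian deaths.
before = total_fatalities(crashes=1000, occupant_death_rate=0.04, pedestrian_deaths=10)

# After: safety equipment halves occupant lethality (the seen effect),
# but drivers who feel safer crash 25% more often and kill twice as
# many pedestrians (the unseen effect).
after = total_fatalities(crashes=1250, occupant_death_rate=0.02, pedestrian_deaths=20)

print(before, after)  # the totals end up nearly identical
```

The point of the sketch is not the specific figures but the structure: the gain shows up in one visible column while the loss is scattered across invisible ones.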
Rothbard would not have found this surprising at all.
The knowledge problem wearing a badge
One of the most powerful arguments against centralized regulation doesn’t come from Rothbard directly but from his intellectual neighbor, Friedrich Hayek. Hayek argued that knowledge in society is dispersed. No single person or committee can know enough to make good decisions for millions of people in thousands of different situations.
Rothbard took this insight and sharpened it into a blade. When a government agency writes a safety regulation, it is essentially claiming to know the right answer for every business, every consumer, every context, and every future scenario. That’s not confidence. That’s delusion.
Think about restaurant health codes. On the surface, they seem perfectly reasonable. Nobody wants to eat at a place where the kitchen is a biohazard. But here’s what the regulation actually does: it creates a single standard that applies equally to a five-star restaurant in Manhattan and a taco truck in rural Texas. The fine dining establishment already has every incentive to maintain spotless conditions because its reputation is its entire business model. The taco truck might be serving the best food in the county out of a setup that doesn’t technically comply with codes written for commercial kitchens.
The regulation doesn’t distinguish between these cases. It can’t. That’s the point. Centralized rules are blunt instruments pretending to be surgical tools.
Rothbard argued that the market already contains a sophisticated, decentralized safety apparatus. It’s called reputation. It’s called competition. It’s called the fact that a business that kills its customers tends not to have customers for very long. These mechanisms aren’t perfect, but they’re adaptive. They process local information. They respond to actual conditions rather than to what some committee imagined conditions might be three years ago when they drafted the rule.
Licensing: protection racket with a government seal
If you want to see Rothbard’s critique in its purest form, look at occupational licensing. In the United States, every single state requires a government license to cut hair. In some states, becoming a licensed cosmetologist requires more training hours than becoming an emergency medical technician.
Read that again. Styling someone’s hair is treated as more dangerous than saving someone’s life in an ambulance.
Rothbard saw occupational licensing for exactly what it is: a cartel arrangement dressed up in the language of public safety. Existing practitioners lobby for licensing requirements that raise the barrier to entry for new competitors. The public is told this protects them from unqualified providers. What it actually does is reduce competition, raise prices, and prevent people, often poor people who can’t afford years of mandated training, from earning a living.
The economist Morris Kleiner has spent decades studying this. His research shows that licensing raises wages for licensed practitioners by 10 to 15 percent while providing little to no measurable improvement in service quality. The safety argument is a fig leaf. The real function is economic protection for insiders at the expense of everyone else.
The moral hazard machine
Safety regulations don’t just fail to make you safer. In many cases, they actively make you less careful.
This is the concept of moral hazard, and it’s one of the most important ideas in economics. When you insulate people from the consequences of their choices, they make worse choices. When you tell people that a government agency has certified something as safe, they stop doing their own research. They stop asking questions. They outsource their judgment to a bureaucracy and go about their day.
Before the FDA existed, consumers were highly attentive to what they bought and who they bought it from. Brand reputation was everything. Companies like Heinz built entire empires on the promise of purity and quality, not because the government told them to, but because customers demanded it and competitors were waiting to steal market share from anyone who slipped up.
The creation of a government safety stamp didn’t add a layer of protection on top of this market discipline. It replaced it. Once people believed the government was watching out for them, they stopped watching out for themselves. The muscle of consumer vigilance atrophied.
Rothbard saw this dynamic everywhere. Bank deposit insurance was supposed to protect savers. Instead, it encouraged banks to take insane risks because depositors no longer bothered to evaluate whether their bank was sound. Why would they? The government guaranteed their money. The 2008 financial crisis was not a failure of deregulation, as the popular story goes. It was a failure of the moral hazard created by decades of implicit and explicit government guarantees.
You could argue this is the most damaging effect of all. Regulations don’t just fail at their stated purpose. They destroy the very social mechanisms that would otherwise do the job better.
The uncomfortable comparison
Here’s a thought experiment Rothbard might have appreciated. Imagine two worlds.
In the first world, there is no government food safety agency. Consumers rely on private testing labs, brand reputation, online reviews, industry associations, and their own judgment. Some bad products slip through. Some people get sick. The failures are visible and immediate, and the market punishes them harshly.
In the second world, a government agency certifies all food as safe. Consumers trust the stamp and stop paying attention. The agency is underfunded and can only inspect a fraction of facilities. Political pressure from agricultural lobbies weakens standards. A contamination event happens anyway, but it takes weeks to identify because everyone assumed the system was working.
Which world is actually safer? The first world has visible, small-scale failures that create constant pressure for improvement. The second world has invisible, systemic failures that accumulate until they explode.
This is not a hypothetical. This is roughly what happens every time there’s a major food recall. The system that was supposed to prevent the problem becomes the reason nobody caught it sooner.
What Rothbard actually wanted
It’s worth being clear about what Rothbard was and wasn’t arguing. He wasn’t saying safety doesn’t matter. He wasn’t saying businesses are angels who would never cut corners. He wasn’t saying nobody ever gets hurt in a free market.
He was saying that the choice isn’t between government regulation and chaos. The choice is between a centralized, slow, politically compromised system that creates false confidence and a decentralized, adaptive, competitive system that creates real accountability.
He was saying that the word “safety” on a government letterhead doesn’t actually make anyone safe. It just makes people stop asking whether they are.
He was saying that the people most harmed by safety regulations are almost always the people those regulations claim to protect: the poor who can’t afford higher prices, the sick who can’t access delayed treatments, the entrepreneurs who can’t clear regulatory hurdles, the consumers who trust a label instead of their own judgment.
The quiet question
There’s a reason this argument makes people uncomfortable. It challenges something deeper than policy preferences. It challenges the idea that there is a parental authority somewhere above us, making sure everything is okay. Rothbard’s critique isn’t really about economics. It’s about adulthood. It’s about whether we trust ourselves and each other to navigate risk, or whether we need a bureaucratic permission slip to live our lives.
Every regulation is a small vote of no confidence in human beings. Rothbard thought we deserved better than that.
Whether you agree with him entirely or not, his core observation is hard to dismiss: a society that outsources its safety to a centralized authority doesn’t become safer. It becomes more fragile. It loses the habits of vigilance, the culture of accountability, and the competitive pressures that actually prevent harm.
The next time someone tells you that a new regulation will keep you safe, ask a simple question: safe from what? And then ask the harder one: at what cost?
The answers might make you less comfortable. But according to Rothbard, discomfort is where real safety begins.