In 1821, David Ricardo did something unusual for an economist who had spent his career defending machinery. He changed his mind. In the third edition of his Principles of Political Economy and Taxation, he added a new chapter titled “On Machinery,” in which he admitted that the introduction of machines could, in fact, hurt workers. Not temporarily. Not as a minor side effect. But structurally, permanently, in ways that reshaped who earned what and why.
Ricardo had watched the steam engine rearrange England. Handloom weavers who once commanded decent wages found themselves replaced by machines that could do the same work faster, cheaper, and without complaint. The weavers did not lack skill. They did not lack effort. They lacked relevance. The economy had simply decided it no longer needed what they were selling.
Now replace “handloom weaver” with “copywriter.” Replace “steam engine” with “ChatGPT.” The parallel is not poetic. It is structural.
The Machine That Eats Thought
For two centuries, the story of automation was simple enough to fit on a bumper sticker: machines take over physical labor, humans retreat into mental labor, everyone moves up the value chain. Factory workers become office workers. Assembly line hands become analysts. The mind was the last safe refuge, the one domain where carbon still beat silicon.
That story is over.
What makes ChatGPT different from every previous wave of automation is not that it is smarter than people. It is not. What makes it different is the type of labor it displaces. For the first time, the machine is not coming for your back. It is coming for your keyboard. It generates legal briefs, marketing copy, code, lesson plans, strategic memos, and undergraduate essays with a competence that ranges from passable to genuinely impressive. It does not do these things the way a human does them. It does them the way a steam engine wove cloth. Not better in every dimension, but faster and cheaper in the dimensions that employers actually pay for.
Ricardo would have recognized this immediately. His insight was never about machines being good or bad. It was about what happens to the price of labor when a cheaper substitute appears. When the steam engine could produce textiles at a fraction of the cost of human weavers, the question was not whether the weavers were talented. The question was whether anyone would pay them their old wages when a machine could deliver 80% of the output at 10% of the cost.
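That substitution arithmetic is worth making explicit. A minimal sketch using the essay's illustrative figures (80% of the output at 10% of the cost; all numbers are hypothetical):

```python
# Illustrative figures from the text: the machine delivers 80% of the
# output at 10% of the cost. All numbers are hypothetical.
human_cost = 100.0       # what a weaver charges for one job
human_output = 1.0       # output of that job, normalized to one unit

machine_cost = 0.10 * human_cost      # 10% of the cost
machine_output = 0.80 * human_output  # 80% of the output

# Cost per unit of output is what an employer actually compares.
human_cost_per_unit = human_cost / human_output        # 100.0
machine_cost_per_unit = machine_cost / machine_output  # 12.5

# To stay competitive, the weaver's price must fall to the machine's
# cost per unit: a cut of 87.5% from the old wage.
required_wage_cut = 1 - machine_cost_per_unit / human_cost_per_unit
```

The point of the sketch is that the employer's comparison is cost per unit of output, not raw quality: a machine that is worse at the work can still set the ceiling on the human's wage.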
That same math now applies to a startling range of knowledge work.
The Compensation Principle and Its Comfortable Lie
Economists after Ricardo spent a great deal of energy softening his uncomfortable conclusion. They developed what is sometimes called the compensation principle: yes, machines destroy some jobs, but they create new ones. The displaced weaver becomes a machine operator. The displaced machine operator becomes a programmer. The economy reshuffles, and in the long run, everyone is better off.
This principle is not wrong, exactly. It is just incomplete. And its incompleteness matters enormously right now.
The compensation principle assumes that displaced workers can transition to the new roles that technology creates. For physical labor, this often held true, if painfully and slowly. A farmer could learn to operate a factory machine. A factory worker could learn to use a computer. The transitions were hard, but the skills involved were at least categorically different from what the machine was doing. You moved from muscle work to mind work, and the machine could not follow you there.
But where does the knowledge worker go when the machine follows them into the domain of thought? If ChatGPT can write serviceable marketing copy, the copywriter cannot simply “move up” to more complex writing, because the next version of the model is already creeping up that ladder. The escape hatch that existed for two hundred years, the retreat into cognitive complexity, is closing. Not because AI is truly intelligent, but because a remarkable amount of what we called “knowledge work” turns out to have been pattern matching all along.
This is the part that stings. Not that the machine is brilliant, but that much of professional intellectual labor was less brilliant than we believed.
Ricardo and the Rent Seekers of the Mind
There is a concept in Ricardo that does not get enough attention in conversations about AI, and it is the concept of rent. In Ricardian economics, rent is what you earn not because of what you do, but because of what you control. Landlords earned rent not because they were productive, but because they owned land that others needed. The rent had nothing to do with effort or talent. It was a tax on scarcity.
A significant portion of knowledge work compensation has functioned as a kind of intellectual rent. Lawyers did not earn high fees purely because legal reasoning is difficult. They earned high fees partly because legal knowledge was scarce, access to it was restricted, and the cost of acquiring it was enormous. The same was true for consultants, analysts, doctors interpreting routine cases, and financial advisors giving standard portfolio advice. Much of what these professionals sold was not unique insight. It was access to a body of knowledge that most people could not easily obtain.
ChatGPT is the great rent destroyer. It does not just automate tasks. It democratizes access to knowledge that was previously locked behind expensive credentials and professional gatekeeping. When anyone with an internet connection can generate a competent first draft of a legal contract, the scarcity premium on basic legal knowledge collapses. When a small business owner can ask an AI to build a marketing strategy and get something reasonable back in thirty seconds, the mid-tier marketing consultant has a problem that no amount of personal branding will solve.
Ricardo saw the same dynamic with land. When new agricultural techniques increased the productivity of previously marginal land, the rents on the best land fell. The landlords did not become less wealthy overnight. But the structural advantage they held was eroding. The parallel to today is almost too clean. The “land” that knowledge workers occupied, their exclusive access to specialized information and competent synthesis, is becoming less scarce by the month.
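Ricardo's differential theory of rent makes this erosion easy to quantify. A minimal sketch, with invented bushel figures, of how raising the productivity of marginal land squeezes the rent on the best land:

```python
def ricardian_rent(plot_output, marginal_output):
    """Differential rent: what a plot yields above the marginal,
    rent-free plot that is just barely worth cultivating."""
    return max(plot_output - marginal_output, 0)

best_plot = 100  # bushels from the most fertile land (hypothetical)

# Before: the worst land worth cultivating yields 60 bushels,
# so the best plot commands a 40-bushel scarcity premium.
rent_before = ricardian_rent(best_plot, 60)

# New techniques lift marginal land to 80 bushels. The landlord did
# nothing differently, yet the premium halves: scarcity, not effort,
# set the rent.
rent_after = ricardian_rent(best_plot, 80)
```

The same arithmetic applies when the "land" is exclusive access to specialized knowledge: every improvement in what the marginal substitute can produce shrinks the premium commanded by the best-positioned practitioners.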
The Luddite Fallacy Is Not Always a Fallacy
Here is where things get uncomfortable for the technology optimists. There is a popular argument that worrying about technological unemployment is the “Luddite fallacy,” a mistake people have made with every new technology since the spinning jenny. Machines always create more jobs than they destroy, the argument goes. Just give it time.
But calling it a fallacy assumes the conclusion before examining the evidence. The original Luddites were not wrong about their own lives. They were textile workers who saw their livelihoods destroyed in real time. The fact that the British economy eventually created new jobs does not retroactively make their suffering a fallacy. It makes it a tragedy that happened to precede a recovery. And that recovery took decades, involved enormous social upheaval, and required entirely new institutions like labor unions, public education, and social safety nets to manage the transition.
The question is not whether AI will eventually lead to new forms of work. It probably will. The question is what happens in the intervening period, and who bears the cost. Ricardo understood this. His chapter on machinery was not a prediction of permanent unemployment. It was a warning that the transition could be devastating for specific classes of workers, even if the aggregate economy benefited.
The intellectual class is now that specific class. And the irony is rich enough to be its own dessert course. For two centuries, it was the educated professional class that told factory workers and manual laborers to “adapt,” to “reskill,” to “embrace change.” Now the same disruption has arrived at their own desks, and suddenly the nuances of technological displacement seem a lot more pressing.
What Ricardo Could Not Have Predicted
There is one dimension of the current moment that genuinely exceeds anything in Ricardo’s framework, and it involves the speed of iteration. The steam engine improved over decades. The power loom was refined across generations. Workers displaced by early industrial machines had, in theory, a lifetime to adapt. Their children could train for the new economy.
AI does not work on that timeline. GPT-3 was impressive. GPT-4 was substantially better. Each iteration narrows the gap between what the machine produces and what a skilled human produces. The knowledge worker is not competing against a fixed opponent. They are competing against a system that improves on a curve that makes Moore’s Law look leisurely.
This changes the calculus of adaptation. It is one thing to say “learn new skills” when the new skills will be relevant for twenty years. It is another thing entirely when the new skills might be automated before you finish the online course teaching them. The treadmill is not just moving. It is accelerating. And the people on it are beginning to notice.
The Surprising Connection to Veblen
There is an interesting lens here from Thorstein Veblen, the economist who gave us the idea of “conspicuous consumption.” Veblen argued that much of what the upper class consumed was not about utility but about signaling status. The expensive watch does not tell time better than a cheap one. It tells other people you can afford an expensive watch.
A similar dynamic has quietly operated in knowledge work for decades. The consultant’s polished slide deck, the lawyer’s meticulously formatted brief, the analyst’s fifty-page report. Much of this output was not valued for its informational content alone. It was valued because it signaled that a credentialed professional had spent expensive hours producing it. The quality of the thinking often mattered less than the prestige of the thinker.
ChatGPT disrupts this signaling mechanism with almost comic efficiency. When a machine can produce the polished slide deck in minutes, the signal value collapses. Clients begin to wonder what, exactly, they were paying for. Was it the insight, or was it the theater of expertise? For many professionals, the honest answer is uncomfortable.
So Who Survives?
Ricardo’s framework actually offers some guidance here, though it is not the guidance most people want.
In Ricardo’s world, the workers who survived mechanization were not the ones who tried to compete with machines on the machine’s terms. No weaver beat a power loom by weaving faster. The survivors were those who did things machines could not do, or who found ways to use the machines as tools that amplified uniquely human capabilities.
The same logic applies now. The knowledge workers who will thrive are not those who write faster than ChatGPT or analyze data more efficiently than an algorithm. They are the people who do what the machine structurally cannot: navigate genuine ambiguity, exercise judgment in situations where the stakes are real and the data is incomplete, build trust through human relationships, and ask questions that no one has thought to ask before.
The irony is that these are the skills that professional education has systematically undervalued for decades. Business schools taught frameworks and models. Law schools taught precedent and procedure. Medical schools taught diagnosis protocols. The soft, messy, irreducibly human skills, the ability to sit with uncertainty, to read a room, to know when the data is lying, were treated as secondary. Nice to have, but not the core product.
They are now the core product. Everything else is becoming a commodity.
Ricardo’s Unfinished Warning
David Ricardo died in 1823, just two years after publishing his revised views on machinery. He did not live to see the full consequences of industrialization. He did not see the decades of worker misery that preceded the eventual economic gains. He did not see the Chartist movement, the Factory Acts, or the slow, painful construction of a social contract that made industrial capitalism survivable for ordinary people.
We are at a similar inflection point. The steam engine of the intellectual class is here. It is not going away. It is getting better. And the question is not whether it will reshape the economy of thought. That is already happening. The question is whether we will manage the transition with more wisdom than the Victorians managed theirs, or whether we will repeat the same pattern: a long period of disruption and suffering, followed by institutional reforms that arrive a generation too late.
Ricardo warned us. We just assumed the warning was meant for someone else.


