The question of whether machines can truly think has haunted us since the first computers began solving mathematical problems at speeds that put human calculators to shame. Today, as artificial intelligence systems write poetry, diagnose diseases, and engage in conversations that can fool us into thinking we’re talking to another person, the question feels more urgent than ever.
To understand whether machines can think, we might need to journey back to a time before computers existed—to the mind of a German mathematician and philosopher who revolutionized our understanding of what thinking actually is.
Gottlob Frege, working in the late 19th and early 20th centuries, never encountered anything we’d recognize as a modern computer. Yet his groundbreaking work on logic and mathematics gave us the very framework that makes modern computing possible. If anyone could help us understand whether machines think, it might be the man who first showed us how to translate thought itself into formal symbols.
The Architect of Pure Reason
Frege’s life work centered on a deceptively simple question: What makes mathematical reasoning so reliable? Why can we trust that 2 + 2 equals 4, not just here and now, but everywhere and always?
Before Frege, logic had remained largely unchanged since Aristotle. It was a tool for organizing arguments, but it lacked precision. Frege changed everything with his 1879 work Begriffsschrift (Concept-Script), which introduced what we now call predicate logic—a formal language capable of expressing complex logical relationships with mathematical precision.
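To see why this was such a leap, consider a claim Aristotelian syllogistic cannot comfortably express but predicate logic handles naturally: nested quantifiers, as in "every element has a successor." A minimal sketch in Python, using an invented finite domain and a hypothetical `succ` predicate purely for illustration, checks such a quantified claim directly:

```python
# Toy illustration (not Frege's own notation): predicate logic lets us nest
# quantifiers, which syllogistic logic could not express.
# Claim to check: for all x, there exists y such that succ(x, y).

domain = [0, 1, 2, 3]  # an invented finite domain

def succ(x, y):
    """Predicate: y is the successor of x."""
    return y == x + 1

# ∀x ∃y. succ(x, y), evaluated over the finite domain
every_has_successor = all(any(succ(x, y) for y in domain) for x in domain)
print(every_has_successor)  # False: 3 has no successor inside this domain
```

The point is not the arithmetic but the form: the truth of the whole depends on how the quantifiers nest, exactly the structure Frege's notation made explicit.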
What made Frege’s innovation so radical was his insight that thoughts could be separated from the psychological processes that produce them. He drew a sharp distinction between what he called the “grasping” of a thought and the thought itself. A thought, for Frege, existed independently in what he termed the “third realm”—neither physical like objects in the world, nor mental like our subjective experiences, but abstract and objective like mathematical truths.
This might sound like philosophical hair-splitting, but it’s actually crucial for our question about machine thinking. Frege was arguing that genuine thoughts have a kind of existence independent of any particular mind. They’re not just patterns of neurons firing or psychological states—they’re objective structures that minds (whether human or otherwise) can grasp.
The Sense and Reference Revolution
Frege’s distinction between Sinn (sense) and Bedeutung (reference or meaning) provides another essential piece of the puzzle. Consider the phrases “the morning star” and “the evening star.” Both refer to the same object—the planet Venus—but they express different ways of thinking about that object, different modes of presentation. The reference is identical, but the sense differs.
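The structure of the distinction can be sketched in a few lines of Python. The "sense" strings and the pairing of sense with referent below are my own illustrative stand-ins, not anything Frege proposed, but they show how two expressions can share one referent while presenting it differently:

```python
# Hedged sketch: two expressions, one referent, two modes of presentation.

venus = object()  # stands in for the planet itself, the single referent

# Each expression pairs a mode of presentation (sense) with a referent.
morning_star = {"sense": "the bright star seen at dawn", "referent": venus}
evening_star = {"sense": "the bright star seen at dusk", "referent": venus}

same_reference = morning_star["referent"] is evening_star["referent"]
same_sense = morning_star["sense"] == evening_star["sense"]
print(same_reference, same_sense)  # True False
```

This is why "the morning star is the evening star" is informative while "the morning star is the morning star" is trivial: the references coincide, but the senses do not.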
This distinction matters immensely when we consider machine intelligence. When a computer system identifies an object in an image or translates a sentence from one language to another, is it grasping the sense of these expressions, or merely manipulating symbols according to formal rules? Does it understand what Venus is, or does it simply link tokens in its training data?
Frege would likely argue that genuine thought requires grasping thoughts in their objective, third-realm existence. The question becomes: Can machines do this, or are they forever confined to symbol manipulation without genuine comprehension?
The Chinese Room Walks Into Frege’s Study
This brings us directly to one of philosophy’s most famous thought experiments, John Searle’s Chinese Room, which resonates powerfully with Fregean insights. Imagine a person who speaks only English locked in a room with a massive rulebook for manipulating Chinese characters. People outside pass Chinese questions into the room, and the person inside, following the rules perfectly, produces Chinese answers that native speakers find entirely appropriate. From outside, it appears the room understands Chinese. But does it?
Searle argues no—there’s symbol manipulation without understanding. The person inside doesn’t know what any of the Chinese characters mean; they’re just following formal rules. This seems remarkably similar to how computers operate: they manipulate symbols according to rules (programs) without grasping what those symbols mean.
Frege might diagnose this as a failure to grasp thoughts at all. The Chinese Room follows syntactic rules—the formal structure of language—but misses what Frege considered essential: the actual thoughts being expressed. The room has syntax without semantics, form without content.
But here’s where things get interesting: Couldn’t we say the same about human brains? After all, our neurons just fire according to biochemical rules. They don’t “understand” anything either—they just follow the laws of chemistry and physics. Yet somehow, conscious understanding emerges from this biological symbol manipulation. Why couldn’t the same happen in silicon?
The Formalization Paradox
Here lies an irony in Frege’s own legacy: he, more than anyone, showed that reasoning could be made mechanical. His concept script, the Begriffsschrift, was designed to make the logical structure of thoughts completely transparent, eliminating the ambiguities and imprecision of natural language.
This creates what we might call the Fregean paradox of machine intelligence: If thinking can be formalized, and if machines can implement formal systems, why can’t machines think? The answer must lie in whether there’s something essential to thought that resists formalization, something that cannot be captured in rules.
Yet the act of grasping a thought, of understanding what a statement means, appeared to Frege to require something beyond any mechanical process. It required a rational mind capable of recognizing truth, of seeing the necessity in logical connections, of understanding meanings rather than merely processing syntax.
Context and the Context Principle
Frege’s context principle—“never ask for the meaning of a word in isolation, but only in the context of a proposition”—offers another lens through which to view machine intelligence. Modern artificial intelligence systems, particularly large language models, seem remarkably adept at using words in context. They generate text that is contextually appropriate, that follows the subtle rules governing how words relate to one another in sentences.
But is this the kind of context Frege had in mind? For Frege, context involved understanding the unified thought that a complete sentence expresses. It’s not just about statistical correlations between words (this word often appears near that word), but about understanding how the parts contribute to a unified meaning, a complete thought that can be true or false.
When a machine learning model predicts the next word in a sequence, is it grasping thoughts in context, or merely finding patterns in data? The system might produce perfectly grammatical, contextually appropriate text without understanding a single proposition it generates.
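The "finding patterns in data" side of this contrast can be made concrete with a deliberately crude sketch: a bigram model that predicts the next word purely from co-occurrence counts. The corpus below is invented for illustration, and real language models are vastly more sophisticated, but the principle (frequency, not comprehension) is the same:

```python
# Toy next-word predictor built from raw bigram counts.
from collections import Counter, defaultdict

corpus = ("the morning star is the evening star "
          "and the evening star is venus").split()

# Count which word follows which in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("evening"))  # 'star': chosen by frequency alone
```

The model "knows" that "star" follows "evening" only in the sense that it counted the pair; it holds no proposition about Venus that could be true or false.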
Yet we must ask: How do we know that human understanding involves anything more? Perhaps what we call understanding is itself just a more sophisticated form of pattern recognition, one that happens to be implemented in biological rather than digital hardware.
The Objectivity Problem
One of Frege’s central concerns was the objectivity of truth and thought. Mathematical truths, he argued, are not created by our minds but discovered. They exist independently, timelessly, in his third realm. When we prove a theorem, we’re not inventing something; we’re recognizing a truth that was always there.
This raises a fascinating question about machine intelligence: Could a machine recognize objective truth, or would it always be limited to whatever its training data and programming dictate? If a machine “discovers” a mathematical proof, is it genuinely recognizing a timeless truth, or simply executing an algorithm?
Frege would likely insist that genuine thought requires the capacity to grasp objective thoughts—to recognize truth independently, not just to be programmed with true beliefs. The question becomes whether machines could ever transcend their programming, whether they could genuinely engage with the third realm rather than merely manipulate symbols in ways that we, from the outside, interpret as meaningful.
This connects to a crucial distinction in contemporary artificial intelligence research: the difference between artificial general intelligence (AGI) and narrow AI. Current AI systems are extraordinarily capable within specific domains but lack the kind of general reasoning ability that Frege associated with genuine thought. They don’t discover truths so much as optimize functions based on training data.
Functions and Concepts: The Mathematical Soul
Frege’s analysis of functions and concepts provides yet another angle on machine thinking. He argued that concepts are fundamentally like mathematical functions—they map objects to truth values. The concept “horse” is a function that returns “true” for horses and “false” for non-horses.
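That functional picture translates almost directly into code. The sketch below models a concept as a characteristic function from objects to truth values; the dictionary representation and the species test are invented stand-ins, and a deliberately crude way to "possess" the concept horse:

```python
# A concept as a function from objects to truth values, in Frege's spirit.

def is_horse(obj):
    """The concept 'horse' as a characteristic function: object -> bool."""
    return obj.get("species") == "Equus ferus caballus"

secretariat = {"name": "Secretariat", "species": "Equus ferus caballus"}
garfield = {"name": "Garfield", "species": "Felis catus"}

print(is_horse(secretariat), is_horse(garfield))  # True False
```

Notice what the function does and does not do: it sorts objects into the concept's extension, yet nothing in it reflects what a horse is or how "horse" relates to "animal" or "mammal."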
This functional understanding of concepts seems remarkably computer-friendly. After all, computers are essentially function-executing machines. They take inputs and produce outputs according to specified rules. If concepts are functions, and machines can implement functions, then machines can implement concepts. And if they can implement concepts, isn’t that thinking?
But Frege would likely resist this conclusion. For him, possessing a concept involved grasping what the concept is, understanding its place in logical space, recognizing its relationships to other concepts.
A machine might reliably classify images as “horse” or “not horse” without understanding what a horse is, what it means to be a horse, or how the concept of horse relates to concepts like “animal,” “mammal,” or “quadruped.”
Where Frege Might Stand
If we could resurrect Frege and show him modern artificial intelligence systems—watching GPT-4 write essays, seeing AlphaGo master a game that was thought to require human intuition, observing computer vision systems identify objects with superhuman accuracy—what would he think?
On one hand, he might be astonished by how much of what we call thinking can indeed be formalized and mechanized. His life’s work demonstrated that logical reasoning could be captured in formal rules, and modern AI proves that implementing these rules mechanically can produce impressively intelligent-seeming behavior.
On the other hand, Frege would likely maintain that something crucial is missing. The machine manipulates symbols, but does it grasp the thoughts those symbols express? It produces correct outputs, but does it understand what makes them correct? It follows rules, but does it recognize their logical necessity?
For Frege, thinking wasn’t just about producing the right answers. It was about entering into a relationship with the third realm of abstract thoughts.
The Contemporary Relevance
Frege reminds us that the question “Can machines think?” cannot be answered simply by pointing to impressive capabilities. We need to be clear about what we mean by thinking. If thinking means processing information, solving problems, or producing appropriate outputs, then machines obviously can think—and in many domains, they think better than we do.
But if thinking means something more—grasping meanings, understanding truths, recognizing logical necessity, relating to an objective realm of thoughts—then the question remains open. Frege’s work suggests that this kind of thinking might involve something that cannot be reduced to computation, something that requires a kind of rational insight that machines, for all their power, might not possess.
In the end, his work shows us that thought has a structure, that it can be formalized, that it operates according to logical rules. But it also suggests that there might be an irreducible element to understanding—a grasping of objective meanings that transcends mere symbol manipulation. As we stand on the threshold of increasingly sophisticated artificial intelligence, Frege’s philosophy offers us tools to think clearly about what these systems are really doing. Are they genuinely thinking, or merely simulating thought? Do they understand, or only behave as if they understand?
The man who defined formal thought gives us no easy answers. But he gives us something perhaps more valuable: the conceptual framework to ask the right questions. And in the end, asking the right questions might be the most genuinely thoughtful thing we can do.