What Does It Mean to Compute? The Church-Turing Thesis
Suppose you’re given a long multiplication problem: 347 × 892. You sit down with a pencil and paper, and you work through it step by step. You don’t need any special creativity or insight—you just follow the rules you learned. If you make no mistakes, you’ll get the right answer every time. Anyone else who follows the same rules will get the same answer too. That’s what mathematicians and philosophers call an effective method: a set of exact instructions that, if carried out without error, produces the right result in a finite number of steps, requires no ingenuity or intuition, and could in principle be done by a human being with nothing but paper and pencil.
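The "effective method" idea can be made concrete in code. Below is a minimal sketch (my own illustration, not from the original discussion) of the grade-school routine as an explicit, rule-by-rule procedure; Python's `*` operator does the work inside each row, so this is purely illustrative.

```python
def long_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers row by row, as on paper.

    Each loop iteration is one 'row' of the written calculation:
    a times a single digit of b, shifted by that digit's place value.
    No step requires any insight; every step is a fixed rule.
    """
    digits_b = [int(d) for d in str(b)][::-1]  # least significant digit first
    total = 0
    for place, digit in enumerate(digits_b):
        total += a * digit * 10 ** place  # one row, shifted into place
    return total

print(long_multiply(347, 892))  # → 309524
```

Anyone (or anything) that carries out these steps without error gets the same answer, which is exactly what makes the method "effective".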
Now here’s a strange thing. In the 1930s, a young mathematician named Alan Turing asked: What are the limits of such methods? Can every problem be solved by following a clear set of rules? Or are some problems simply beyond what any step-by-step method can accomplish, even with unlimited time and paper?
That question led Turing and another logician, Alonzo Church, to something now called the Church-Turing thesis. It isn’t a theorem you can prove in the ordinary mathematical sense. It’s more like a claim about what it means for something to be computable at all. And it’s still debated today.
The Machine in the Mind
Turing’s great insight was to ask: what is a computation, really? He didn’t start by thinking about electronic computers—there were hardly any in 1936. Instead, he thought about human beings doing calculations. In those days, “computer” meant a person whose job was to sit and compute numbers by following rules. Thousands of people worked as computers in businesses and research labs.
Turing imagined an idealized human computer, stripped down to the bare essentials. This person:
- Writes symbols on paper divided into squares
- Can only recognize and write a finite number of different symbols
- Can only look at a small number of squares at once
- Can only move their attention a short distance to see new squares
- Changes symbols only when looking directly at them
- Acts based on what they see and their current “state of mind”
- Has only a finite number of relevant states of mind
- Performs only simple elementary operations one at a time
Then Turing said: we can build a simple imaginary machine—now called a Turing machine—that does exactly what this human computer does. The machine has a long tape divided into squares (like an endless strip of paper), a scanner that reads and writes one square at a time, and a finite set of internal states. It follows instructions like “if you’re in state 5 and see a 1, erase it, write a 0, move left, and go to state 3.”
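A machine like this is easy to simulate. The sketch below is a hypothetical minimal simulator of my own (the function and table names are inventions for illustration); the `flip` table encodes rules of exactly the form quoted above, mapping (state, symbol) to (new symbol, head move, new state).

```python
def run_turing_machine(table, tape, state="start", halt="halt", blank=" "):
    """Run a transition table on a tape until the halting state is reached."""
    cells = dict(enumerate(tape))  # tape squares, extendable in both directions
    head = 0
    while state != halt:
        symbol = cells.get(head, blank)           # scan the current square
        new_symbol, move, state = table[(state, symbol)]
        cells[head] = new_symbol                  # write, then move the head
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip()

# Example machine: sweep right, flipping 0s and 1s, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run_turing_machine(flip, "10110"))  # → 01001
```

The whole machine is nothing but a lookup table plus a head position, which is Turing's point: no ingredient of the idealized human computer is missing.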
Here’s the crucial claim—Turing’s thesis: Anything that can be done by an effective method (a clear set of rules, no creativity needed) can be done by some Turing machine.
This is not obvious. Could there be effective methods that require something a Turing machine can’t do? Turing argued no, by showing that the essential features of human computation are exactly what his machine captures. If you disagree, you’d have to point to some feature of human rule-following that his analysis missed. Philosophers still argue about whether he got it right.
Different Roads to the Same Place
Around the same time, Alonzo Church came up with a completely different way to define computability, using something called lambda-definability (a system for defining functions through repeated substitution). It looked nothing like Turing’s machines. Yet Church proved that his system and Turing’s picked out exactly the same set of computable functions.
This was striking. Two people, working independently, using totally different approaches, arrived at the same boundary between what can and cannot be computed. Since then, many other formalisms have been added to the list—register machines, Markov algorithms, and more—and they all turn out to be equivalent. This convergence is one of the main reasons most mathematicians and computer scientists accept the Church-Turing thesis.
As Turing himself put it: “L.C.M.s [Turing machines] can do anything that could be described as ‘rule of thumb’ or ‘purely mechanical’.” By 1948, he said this was “sufficiently well established that it is now agreed amongst logicians” that “calculable by means of a Turing machine” is the correct precise version of the vague idea of an effective method.
The Problem That Started It All
Turing and Church weren’t just being theoretical. They were trying to solve a specific problem called the Entscheidungsproblem (German for “decision problem”). This was the problem of finding a method that could decide, for any logical formula, whether or not it was provable. The great mathematician David Hilbert thought such a method must exist. He called it “the main problem of mathematical logic.”
Both Church and Turing proved that there is no such method. No Turing machine—and therefore, if the thesis is correct, no effective method at all—can decide the provability of every logical formula. This was a bombshell. Hilbert had believed that every well-posed mathematical problem could be solved, one way or another. Church and Turing showed that some cannot.
To prove this, Turing used an argument now usually presented in terms of the halting problem. He showed that no Turing machine can always correctly answer the question: “Will this other Turing machine eventually halt, or will it run forever?” Any method for the decision problem could be turned into a method for the halting problem, and since the halting problem is unsolvable, the decision problem is too.
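The diagonal argument behind the halting result can be sketched in code. Here `would_halt` is a hypothetical oracle that cannot actually be written; the sketch (my own, not Turing's notation) shows why its existence would be self-contradictory.

```python
def would_halt(program_source: str, input_source: str) -> bool:
    """Pretend oracle: True iff the program halts on the given input.

    No such total function can exist; this stub only marks its place
    in the argument.
    """
    raise NotImplementedError("no such function can exist")

def contrary(program_source: str):
    """Do the opposite of whatever the oracle predicts about a program
    run on its own source code."""
    if would_halt(program_source, program_source):
        while True:   # oracle says "halts", so loop forever
            pass
    return            # oracle says "loops forever", so halt immediately

# Feeding `contrary` its own source text forces the contradiction:
# if the oracle says it halts, it loops; if it says it loops, it halts.
# Either way the oracle is wrong about at least one input, so it can't exist.
```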
This might sound abstract, but it has a concrete meaning: there are questions we can formulate clearly that we cannot answer by any systematic method, no matter how clever we are. Not because we haven’t tried hard enough, but because it’s logically impossible.
Can Machines Do More Than Humans?
Here’s where things get tricky. The Church-Turing thesis is about what human beings can compute by following rules. But what about actual machines, made of metal and silicon? Could a machine do things no human computer could, even in principle?
This is a different question entirely. The mathematician and logician Robin Gandy (Turing’s only PhD student) called it “Thesis M”: Whatever can be calculated by a machine can be calculated by a Turing machine. He pointed out that this is not the same as Turing’s thesis, and it might even be false.
Consider an “accelerating Turing machine”—an imaginary machine that performs its first operation in one second, the second in half a second, the third in a quarter of a second, and so on. Since 1 + 1/2 + 1/4 + 1/8 + … = 2, this machine can perform infinitely many operations in just two seconds. This would allow it to solve problems that ordinary Turing machines cannot, like the halting problem itself.
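The arithmetic behind that two-second bound is just a geometric series, and a throwaway numerical check (my own sketch, not part of the original argument) confirms it: after n operations the machine has used 2 − 2^(1−n) seconds, which never exceeds 2.

```python
# Partial sums of 1 + 1/2 + 1/4 + 1/8 + ... approach 2 from below.
total = 0.0
for n in range(60):
    total += 0.5 ** n  # time taken by the n-th operation, in seconds

print(total)  # → 2.0
```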
Such machines probably can’t exist in our universe (they’d violate physical limits), but they show that the Church-Turing thesis is about effective methods, not about all possible machines. The thesis says nothing about what a machine that breaks the laws of physics might do.
Modern physics raises even more interesting questions. Some researchers have shown that certain quantum systems might exhibit behavior that cannot be computed by any Turing machine. Others have designed “relativistic computers” that use gravitational time-dilation effects to perform calculations beyond Turing’s limits—at least in theory, without violating any known physical laws.
So the Church-Turing thesis, which seemed so definitive in 1936, now looks more like an open question when it comes to the physical world. Nobody really knows if there are physical processes—maybe even in our own brains—that go beyond what Turing machines can do.
Why It Still Matters
The Church-Turing thesis is sometimes described as “something between a theorem and a definition.” It’s not quite a theorem because it connects a precise mathematical concept (Turing machines) with an informal, intuitive one (effective methods). But it’s more than a definition because it makes a substantive claim: that the informal concept and the formal one line up perfectly.
This thesis underlies all of computer science. When we say a problem is “computable” or “uncomputable,” we mean computable by a Turing machine. If the Church-Turing thesis is correct, that’s the same as “computable by any effective method whatsoever.” The entire field of computability theory rests on this foundation.
But the thesis also raises deep philosophical questions. What is computation, really? Is it something that only happens in human minds and human-built machines, or is it a feature of the physical world itself? If a quantum system “computes” something that no Turing machine can, is it still computing? Or have we just stretched the word too far?
Turing himself was careful. He said his thesis “cannot be proved” and that the arguments for it are “fundamentally, appeals to intuition.” He compared the term “systematic method” to the word “vegetable”—something we understand well enough in ordinary conversation, but that becomes slippery when you try to pin it down exactly. Is coal a vegetable? What about coal gas, or fossilized trees, or viruses? Similarly, what counts as a “systematic method” can be hard to say for certain in borderline cases.
That’s part of what makes this whole topic fascinating. The Church-Turing thesis isn’t just a technical result in logic. It’s a claim about the nature of human reasoning, the limits of machines, and the relationship between mind and mechanism. Almost ninety years after Turing and Church published their papers, philosophers and scientists are still arguing about whether it’s true—and about what it would even mean for it to be true or false.
Appendices
Key Terms
| Term | Meaning in this debate |
|---|---|
| Effective method | A set of exact, finite instructions that can be carried out by a human with no insight or creativity, producing a result in a finite number of steps |
| Turing machine | An imaginary machine with an infinite tape, a scanner, and a finite set of states—used as a precise model of what can be computed by an effective method |
| Church-Turing thesis | The claim that anything computable by an effective method is computable by a Turing machine (and vice versa) |
| Entscheidungsproblem | The problem of finding a method to decide whether any given logical formula is provable—shown to be unsolvable by Church and Turing |
| Halting problem | The problem of determining whether a given Turing machine will eventually stop or run forever—shown to be unsolvable |
| Lambda-definability | Church’s alternative formalism for defining computable functions, which turned out to be equivalent to Turing machines |
| Thesis M (the maximality thesis) | Gandy’s separate claim (not the same as Church-Turing) that anything calculable by any machine, not just by a human computer, is calculable by a Turing machine |
Key People
- Alan Turing — A British mathematician who, at age 24, invented the Turing machine and proved that the decision problem is unsolvable. He also helped crack Nazi codes during World War II and designed early computers.
- Alonzo Church — An American logician who independently reached the same conclusions as Turing, using lambda-definability instead of machines. His approach was less intuitive than Turing’s.
- David Hilbert — A famous German mathematician who believed every mathematical problem could be solved. His challenge to find a decision method was what Church and Turing proved impossible.
- Robin Gandy — Turing’s only PhD student, who later distinguished between Turing’s thesis (about human computation) and the separate question of what machines can do.
Things to Think About
- Turing’s analysis of human computation assumes that the computer has only a finite number of “states of mind” relevant to the calculation. But what if human thinking is more flexible than that—what if new states can emerge during a computation in a way Turing’s model doesn’t capture? The mathematician Gödel thought this was a real possibility. What do you think?
- The thesis says that any effective method can be carried out by a Turing machine. But what counts as an effective method? Could there be methods that work but don’t fit Turing’s description—for example, methods that use random chance, or that change their own rules as they go? If so, what would that mean for the thesis?
- Accelerating Turing machines are imaginary, but relativistic computers might be physically possible. If someone actually built a computer that could solve the halting problem (using time dilation, for example), would that disprove the Church-Turing thesis? Or would it just mean we need to rethink what “computable” means?
- The Church-Turing thesis is about what humans can compute by following rules. But in practice, humans don’t actually compute like Turing machines—we get bored, make mistakes, run out of paper. Does the thesis describe something real, or just an idealization that doesn’t apply to actual human thinking?
Where This Shows Up
- Every computer you use is, memory limits aside, a physical realization of a universal Turing machine. The limits Turing discovered apply to all digital computers.
- Artificial intelligence debates often invoke the Church-Turing thesis. Some argue that if human thought is just following rules, then a Turing machine could in principle replicate it. Others argue that human thinking involves something beyond rule-following.
- Quantum computing researchers debate whether quantum computers might break the “extended” Church-Turing thesis (which adds efficiency requirements). Some quantum algorithms seem to solve certain problems faster than any classical computer could.
- Biology has started to borrow these ideas: slime molds, bacterial colonies, and even enzymes are sometimes described as “computing” solutions to problems. Whether this is just a metaphor or something deeper depends partly on how we understand the Church-Turing thesis.