Can Everything Be Computed?
Here’s a strange thing you might not have thought about: when you add 347 and 589, you’re following a procedure. You line up the numbers, add the digits from right to left, carry when you need to. The procedure works every time, for any two numbers you give it. That’s what an algorithm is—a step-by-step recipe that always gives you an answer, never gets confused, and works for any input of the right kind.
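That carrying procedure is itself a small program. Here's a minimal Python sketch of the right-to-left, carry-when-you-need-to recipe (the function name `add_by_hand` is ours, purely for illustration):

```python
def add_by_hand(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings,
    right to left with carries, just like the paper procedure."""
    result = []
    carry = 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        d = carry
        if i >= 0:
            d += int(a[i])
            i -= 1
        if j >= 0:
            d += int(b[j])
            j -= 1
        result.append(str(d % 10))  # write the rightmost digit
        carry = d // 10             # carry the rest
    return "".join(reversed(result))

print(add_by_hand("347", "589"))  # -> 936
```

Notice the properties that make it an algorithm: every step is unambiguous, it terminates on any pair of digit strings, and nothing depends on judgment or luck.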
Now here’s the question that kept some of the smartest people of the twentieth century awake at night: Is there anything that can’t be computed?
Not just “something we haven’t figured out how to compute yet.” Something that cannot be computed, ever, by any algorithm, no matter how clever or how powerful the computer running it. A question that is forever beyond the reach of any step-by-step recipe.
That’s a strange idea. Most of us assume that if you can state a clear question with a definite answer, there must be some way to get that answer—maybe a long way, maybe a way we haven’t discovered yet, but a way. What if that assumption is wrong?
The Recipe-Checking Problem
Let’s start with something concrete. Suppose someone hands you a recipe. Not a cooking recipe—a set of instructions for solving some problem. How can you tell if the recipe actually works? You might test it on a few examples, but that doesn’t prove it works for everything. A recipe that works for 347 and 589 might fail on some other numbers you haven’t tried yet. So you need a way to check whether the recipe is correct.
Now imagine you want to write a meta-recipe: a procedure that takes any recipe as input and tells you whether that recipe will always produce the right answer. This is not a dumb idea—it’s the kind of thing mathematicians and computer scientists dream about. A single algorithm that could check all other algorithms. That would be incredibly powerful.
In the 1930s, a logician named Alonzo Church found a way to prove that this meta-recipe cannot exist. Not because it’s too complicated to figure out, but because it’s impossible in principle. He proved this by doing something very clever: he invented a precise mathematical way of talking about what “computation” means, and then showed that inside that system, there are problems that no algorithm can solve.
What Does “Computable” Even Mean?
Before Church, people talked about “effective procedures” and “algorithmic methods” in a fuzzy, everyday sense. You knew one when you saw one. But Church realized that if you want to prove something can’t be computed, you need a precise definition of what counts as “computable.” Otherwise, someone could always say: “Well, maybe there’s some clever way we haven’t thought of yet.”
So Church invented something called the lambda calculus (he used the Greek letter λ, which is why it’s called that). This was a completely formal system for defining functions—mathematical rules that take an input and produce an output. It was like a programming language before computers existed. You could write down definitions like “the function that doubles its input” using a few symbols and some strict rules for how to manipulate them.
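Python's `lambda` keyword is named after Church's notation, and you can mimic the spirit of his system in it: build everything, even numbers, out of nothing but functions. The encoding below (Church numerals) is standard; the helper names are ours.

```python
# Church numerals: the number n is "a function that applies f, n times".
zero = lambda f: lambda x: x                          # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))       # one more application
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# "The function that doubles its input", written as a lambda term.
double = lambda n: add(n)(n)

def to_int(n):
    """Decode a Church numeral by counting how many times f is applied."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
print(to_int(double(two)))  # -> 4
```

The point is not that this is a practical way to do arithmetic; it's that a few symbols and strict manipulation rules suffice to define functions at all.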
Church proposed that the functions you can define in the lambda calculus are exactly the ones you could compute if you had unlimited time and paper and followed a fixed, mechanical procedure. This claim is now called Church’s Thesis, and it’s one of the most important ideas in computer science.
The tricky part: Church’s Thesis is not something you can prove mathematically. It’s a claim that a precise mathematical definition (lambda-definable functions) captures an intuitive, informal idea (what a human being can compute by following a recipe). There’s no mathematical proof that the definition gets it right—you just have to look at the evidence and decide whether it seems plausible.
The Evidence
Here’s the main evidence. After Church came up with the lambda calculus, several other people independently came up with completely different definitions of “computable.” Alan Turing (who was actually Church’s student for a while) invented the Turing machine—an imaginary device with a tape and a read/write head that can move left and right, reading and writing symbols. Turing argued that anything a human mathematician can do while computing can be simulated by one of these simple machines.
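A Turing machine is simple enough to simulate in a few lines. This is a stripped-down sketch, not Turing's exact formalism; the example machine just flips every bit on its tape and halts when it reaches a blank. The step budget is there because, as this article explains, we cannot tell in advance whether an arbitrary machine halts.

```python
def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.
    `table` maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    cells = dict(enumerate(tape))  # sparse tape, indexed by position
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A machine that flips 0s and 1s, then halts at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "0110"))  # -> 1001
```

Everything the machine does is local and mechanical: look at one symbol, consult the table, write, move. Turing's claim was that this is enough to capture anything a human computer does with pencil and paper.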
Other mathematicians—Gödel, Kleene, Post—came up with still other definitions, all based on different ideas about what computation involves. And here’s the shocking part: every single one of these definitions turned out to pick out exactly the same class of functions. The lambda calculus, Turing machines, recursive functions (Gödel’s approach)—they all produce the same set of computable functions.
That’s a lot of evidence that Church’s Thesis is right. It’s hard to believe that all these different approaches, starting from different intuitions, would accidentally converge on the same class if there were something wrong with them.
The Problem You Can’t Solve
OK, so Church had a definition. Now he could ask: is there any function that isn’t computable? And he found one. Church’s own example was a technical question about lambda expressions, but the most famous version of the result, worked out by Turing in the same year, is called the halting problem, and here’s what it asks:
Suppose you have a computer program (or a Turing machine). You feed it some input. Will it eventually finish running and give you an answer? Or will it run forever, stuck in an infinite loop?
You can probably see why this matters. If you’re writing software, you’d really like to know whether your program will get stuck in an infinite loop. But Church and Turing proved, independently and within months of each other in 1936, that no algorithm can solve this problem in general. There is no recipe that takes any program and any input and tells you, in a finite number of steps, whether the program will eventually stop.
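To feel why "just run it and watch" doesn't settle the question, consider the famous 3n+1 (Collatz) process: halve the number if it's even, triple it and add one if it's odd, repeat until you hit 1. Nobody has proved this halts for every starting number. Running it with a step budget can report "halted after k steps", but a timeout never means "runs forever", only "don't know":

```python
def collatz_steps(n, budget=10_000):
    """Count steps of the 3n+1 process, giving up after `budget` steps.
    Returns the step count if it reaches 1, or None for 'don't know'."""
    steps = 0
    while n != 1:
        if steps >= budget:
            return None  # maybe it loops forever, maybe it's just slow
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6))   # -> 8   (6, 3, 10, 5, 16, 8, 4, 2, 1)
print(collatz_steps(27))  # halts, but takes over a hundred steps
```

Every starting number ever tried does reach 1, yet no amount of watching runs can prove that all of them do. A halting decider would have to answer without running the program at all.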
Let’s see why. It’s a bit like a logical puzzle, so stay with me.
Imagine someone claims to have written a program called HALT-CHECKER that solves the halting problem. You give it a program P and an input I, and it always tells you “yes, P stops on I” or “no, P runs forever on I.”
Now, using HALT-CHECKER, you write a new program called CONTRADICTION. Here’s what CONTRADICTION does: it takes a program Q as input, and it asks HALT-CHECKER what Q would do if you ran it on Q itself (that is, if you fed Q its own code as input). If HALT-CHECKER says “Q stops on Q,” then CONTRADICTION enters an infinite loop. If HALT-CHECKER says “Q runs forever on Q,” then CONTRADICTION stops immediately.
Now here’s the mind-bender: What happens when you run CONTRADICTION on itself? You feed CONTRADICTION’s own code into CONTRADICTION. So CONTRADICTION asks HALT-CHECKER: “Does CONTRADICTION stop when given CONTRADICTION as input?”
If HALT-CHECKER says “yes, it stops,” then CONTRADICTION enters an infinite loop—so it doesn’t stop. If HALT-CHECKER says “no, it runs forever,” then CONTRADICTION stops immediately—so it does stop. Either way, HALT-CHECKER is wrong.
This means HALT-CHECKER couldn’t have existed in the first place. No program can solve the halting problem for all possible programs.
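The whole argument fits in a few lines of deliberately hypothetical Python. `halt_checker` stands for the program the proof assumes exists; `contradiction` is the program built to defeat it. Since no real `halt_checker` can be written (that is the theorem), the body below just raises an error:

```python
def halt_checker(program, input_value):
    """ASSUMED decider: returns True iff program(input_value) halts.
    The proof shows no such function can exist, so there is nothing
    correct to write here."""
    raise NotImplementedError("no such decider can exist")

def contradiction(q):
    """Do the opposite of whatever halt_checker predicts q does on q."""
    if halt_checker(q, q):  # "q halts when fed its own code"...
        while True:         # ...then loop forever
            pass
    return                  # otherwise, halt immediately

# The fatal question: contradiction(contradiction).
# If halt_checker says "halts", contradiction loops; if it says
# "loops", contradiction halts. Either answer is wrong.
```

Nothing in the construction is exotic: programs routinely take other programs as input (compilers, interpreters, linters). The only impossible ingredient is `halt_checker` itself.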
This is Church’s great discovery: there are limits to what computation can do. Not practical limits, like “we don’t have enough time or memory.” Theoretical limits. Impossibilities that follow from the very nature of what computation is.
What This Means
This isn’t just an abstract puzzle. It tells us something deep about knowledge and about ourselves. The halting problem isn’t solvable, and neither are many other problems that mathematicians care about. There’s a whole family of “undecidable” problems—questions that can be stated clearly and have definite answers, but that no algorithm can answer.
This was a shock when Church and Turing first showed it. Before 1936, many mathematicians had hoped that every well-posed mathematical problem could, in principle, be solved by some mechanical procedure. The dream was to build a machine that could prove any true theorem. Church and Turing showed that this dream is impossible. There will always be true statements that are beyond the reach of any algorithm.
But here’s the strange twist: humans can see that the halting problem is unsolvable. We just gave a proof, using ordinary reasoning, that no algorithm can solve it. So it can look as though human reasoning surpasses what any algorithm can do—at least in this one case. We seem to stand outside the system and see things that the system itself cannot see. (Be careful, though: the proof itself is perfectly mechanical, and a proof-checking program could verify every step of it.)
Does that mean human minds are not just complicated computers? Philosophers still argue about this. Church himself thought that the lambda calculus captured everything that a human mathematician does when computing in the mechanical sense. But computing in that sense is just following rules. Are there kinds of thinking that go beyond rule-following? That’s an open question.
Still Open
Philosophers and computer scientists still debate whether Church’s Thesis is exactly right. Most accept it as true, but some argue that we might eventually discover a kind of computation that the standard definitions don’t capture. (Quantum computers, for instance, can solve some problems faster than classical computers, but they can’t solve problems that are impossible for classical computers—they just solve them faster.)
Others have questioned whether the intuitive notion of “computable” can ever be completely captured by a formal definition. Maybe there will always be a gap between what we mean by “computation” and what any mathematical definition can describe.
Church’s work also connects to deeper questions about the nature of language and meaning. He argued that for a language to really work for communication, its well-formed sentences must be effectively recognizable—you need a procedure that can tell, for any string of symbols, whether it counts as a sentence. This sounds like a technical point, but it’s really about what makes communication possible. If you couldn’t tell whether something was a meaningful sentence, you couldn’t trust that communication was happening.
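"Effectively recognizable" just means there is a decision procedure that, for any string of symbols, answers yes or no in finitely many steps. Here is a toy illustration for a tiny invented language (the three-word grammar and all the names are made up for the example):

```python
# A miniature language: a sentence is "Subject Verb Object." exactly.
SUBJECTS = {"alice", "bob"}
VERBS = {"sees", "greets"}
OBJECTS = {"alice", "bob", "carol"}

def is_sentence(s: str) -> bool:
    """Decide, in finitely many steps, whether s is well-formed.
    Always terminates with a definite yes or no."""
    if not s.endswith("."):
        return False
    parts = s.lower().rstrip(".").split()
    return (len(parts) == 3
            and parts[0] in SUBJECTS
            and parts[1] in VERBS
            and parts[2] in OBJECTS)

print(is_sentence("Alice sees Bob."))  # -> True
print(is_sentence("sees Alice Bob"))   # -> False
```

Church's point is that a speaker needs something like this, however implicit: a guaranteed-to-terminate check for "is this even a sentence of our language?"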
This gets at something important about being human. We rely on procedures and recipes all the time—not just in math, but in everyday life. We learn steps for solving problems, we trust that following the steps will get us where we need to go. Church’s work shows that this trust has limits. Some questions can’t be answered by any step-by-step method, no matter how clever. And that’s not a failure of our methods. It’s just the way the world is.
Appendices
Key Terms
| Term | What it means in this article |
|---|---|
| Algorithm | A precise, step-by-step recipe for solving a problem, guaranteed to work for all valid inputs |
| Computable function | A function whose values can be calculated by following some algorithm |
| Church’s Thesis | The claim that the mathematical definition of “computable” (lambda-definable) captures exactly what we mean by “can be computed by following a recipe” |
| Lambda calculus | A formal system invented by Church for defining functions using only symbols and rules for manipulating them |
| Halting problem | The problem of determining whether a given program will eventually stop running or run forever |
| Undecidable problem | A question that has a definite answer but cannot be answered by any algorithm |
| Turing machine | An imaginary, very simple computer invented by Alan Turing that can simulate any algorithm |
Key People
- Alonzo Church — A logician who, in the 1930s, invented the lambda calculus and proved the first major result that some problems are forever beyond the reach of computation. He was known for his careful, precise approach to fundamental questions about logic and mathematics.
- Alan Turing — A British mathematician who independently proved the same result Church did, using a different approach (the Turing machine). He went on to play a crucial role in breaking Nazi codes during World War II and in early work on artificial intelligence.
Things to Think About
- The halting problem proof we gave works by having a program check itself. Is this a cheat? Does it show something deep about self-reference, or is it just a trick that doesn’t tell us much about real-world computation?
- If some problems are truly unsolvable by any algorithm, what does that say about artificial intelligence? Could a really smart AI—smarter than any human—still be limited in the same way?
- Church’s Thesis can’t be proved, only supported by evidence. How much evidence would be enough to convince you? What would it take to prove the thesis wrong?
- We proved that the halting problem is unsolvable using ordinary human reasoning. Humans can see what algorithms cannot. Does this mean human minds are not just algorithms? Or could we program a computer to have “insights” like the one we just had?
Where This Shows Up
- Every time your computer freezes, you’re experiencing the halting problem in practical form. You don’t know if the program will eventually respond or is stuck forever—and neither does the computer.
- Website and app developers work around the halting problem every day. They can’t write a general checker for infinite loops, so they use testing and heuristics to catch common cases.
- Cryptography relies on problems that are practically hard to solve (not theoretically impossible). But Church’s results show that some problems are absolutely impossible, not just hard.
- In science, Church’s work is part of a wider discovery that there are limits to what can be known through formal methods—which touches on physics (are there limits to what we can predict?), biology (can life be fully simulated?), and even theology (are there truths that cannot be reached by any finite process?).
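The "testing and heuristics" workaround mentioned above usually boils down to a budget: run the program, and give up after a fixed number of steps or seconds. A sketch of the idea (the `run_with_budget` helper is illustrative, not a real library; in production systems a watchdog timer or process timeout plays the same role):

```python
def run_with_budget(step, state, budget=1_000_000):
    """Drive `step(state)` until it returns None (halted) or the
    budget runs out. A 'timed out' answer means 'don't know',
    never 'runs forever'."""
    for _ in range(budget):
        nxt = step(state)
        if nxt is None:
            return state, "halted"
        state = nxt
    return state, "timed out"

# A step function that halts: count down to zero.
countdown = lambda n: None if n == 0 else n - 1
print(run_with_budget(countdown, 5))      # -> (0, 'halted')

# A step function that never halts: count up forever.
count_up = lambda n: n + 1
print(run_with_budget(count_up, 0, 100))  # -> (100, 'timed out')
```

This is the practical shape of Church's result: you can always buy more certainty with a bigger budget, but no budget turns "still running" into "will never stop".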