Can Everything Be Computed? What We Can and Can't Calculate
Imagine you have a computer that can do anything—not just run apps or play games, but solve any problem you can think of. You type in a question, and no matter how hard it is, the computer eventually gives you the right answer. Would that be possible?
A group of mathematicians in the 1930s—before anyone had actually built a computer—wanted to know exactly what could and couldn’t be computed. They invented different definitions of “computable,” and surprisingly, they all turned out to be equivalent: anything that could be computed in one system could be computed in all the others. This is now called the Church-Turing Thesis, and it’s one of the foundations of computer science.
But here’s the strange thing these mathematicians discovered: there are problems that are impossible to solve with any computer, no matter how powerful. Not just hard or time-consuming—actually impossible. And there are problems that are theoretically solvable but would take longer than the universe has existed to finish.
Let’s explore what we can and can’t compute.
The Machine That Changed Everything
Alan Turing, a British mathematician, invented what we now call a Turing machine in 1936. It wasn’t a real machine—it was a thought experiment. But it captured something essential about computation.
Here’s how Turing imagined it: a machine with a long tape divided into cells, each cell containing a symbol (like a 0, 1, or blank). The machine has a read/write head that can look at one cell at a time, and it can be in one of a finite number of “states.” Based on what state it’s in and what symbol it sees, it follows a rule: write a new symbol, move left or right, and enter a new state.
That’s it. That’s the whole machine. It sounds incredibly simple, but Turing proved that this machine can compute anything any other computing device can compute. And here’s the really important part: he showed that you can build a universal Turing machine—a single machine that can run any program. This is the theoretical basis for every computer you’ve ever used. Your laptop isn’t a different machine when you play a game versus when you write an essay; it’s the same machine running different programs.
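To make the rule-following concrete, here is a minimal sketch of a Turing machine simulator in Python. The machine, its states, and its rules are our own invented example, not anything from Turing’s paper; this particular machine just flips every bit on its tape and halts.

```python
# A minimal Turing machine simulator (an illustrative sketch).
# The rules table maps (state, symbol) -> (new symbol, move, new state).

def run_turing_machine(tape, rules, state="start", halt_state="halt"):
    tape = list(tape)
    head = 0
    while state != halt_state:
        symbol = tape[head] if head < len(tape) else "_"   # "_" is blank
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rules for a bit-flipping machine: one working state, three symbols.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_rules))  # prints 01001_
```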
Turing’s insight was that one machine can do anything any other machine can do, just by being given the right instructions.
The Problem That Can’t Be Solved
Now for the strange part. Because Turing machines can do any computation, some of them get stuck in infinite loops—running forever without ever finishing. You could write a program that keeps checking numbers looking for a pattern that doesn’t exist, and it would never stop.
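Here is a concrete (made-up) example in Python. A perfect number equals the sum of its proper divisors; nobody has ever found an odd one, and nobody knows whether one exists, so this search may simply never stop:

```python
# Search for an odd perfect number (one that equals the sum of its
# proper divisors). None has ever been found, and whether one exists
# is an open question, so this loop may run forever.

n = 1
while True:
    if sum(d for d in range(1, n) if n % d == 0) == n:
        print("Found an odd perfect number:", n)
        break
    n += 2  # stay on odd numbers
```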
This raises an obvious question: Could you write a program that looks at another program and tells you whether it will eventually stop or run forever? This is called the Halting Problem.
Turing proved that you can’t. No program can correctly determine whether every other program will halt. And he proved this using a clever trick called a diagonal argument.
Here’s the idea. Imagine you had a program H that could decide if any other program would halt on a given input. H takes two numbers: a program number and an input. It outputs “yes” if that program would eventually halt on that input, and “no” if it would run forever.
Now let’s build a new program D that does something tricky. D takes a program number n, runs H(n, n) (asking whether program n would halt when given itself as input), and then does the opposite. If H says “yes, it halts,” then D goes into an infinite loop. If H says “no, it doesn’t halt,” then D immediately stops.
What happens when we ask H about D on input D? Think about it: If H says D(D) halts, then D (by its definition) goes into an infinite loop—so H was wrong. If H says D(D) doesn’t halt, then D (by its definition) stops immediately—so H was wrong again.
This is a contradiction. H can’t exist. No program can decide the halting problem.
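Here is the same argument as a Python sketch. The function halts below is hypothetical, which is exactly the point: Turing’s proof shows no real implementation of it can exist.

```python
def halts(program, input_data):
    """Hypothetical oracle: True if program(input_data) eventually
    stops, False if it runs forever. The proof shows this function
    cannot actually be implemented."""
    ...

def D(program):
    # Do the opposite of whatever halts predicts about program(program).
    if halts(program, program):
        while True:   # halts said "it stops", so loop forever
            pass
    else:
        return        # halts said "it runs forever", so stop at once

# Now ask halts about D run on itself:
# - If halts(D, D) returns True, D(D) loops forever, so halts was wrong.
# - If halts(D, D) returns False, D(D) returns at once, so halts was wrong.
# Either way halts fails, so no such function can exist.
```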
This isn’t just a weird theoretical trick. It proves that there are fundamental limits to what computers can do. Some questions are simply unanswerable by any computational process.
What Does It Mean for a Problem to Be “Hard”?
Now we know about problems that are impossible. But what about problems that are possible but very, very hard?
Think about the difference between:
- Adding up a list of numbers (easy)
- Figuring out whether some subset of a list of numbers adds up to exactly a target number (hard)
The second problem is called Subset Sum. If I give you the numbers {3, 7, 12, 25, 42} and ask if any subset adds up to 50, you could check: 3+7+12+25=47, no. 3+7+12+42=64, no. 7+12+25+42=86, no. 3+25+42=70, no. 3+12+25+42=82, no. 7+25+42=74, no. 3+7+42=52, no. 3+7+25=35, no. 3+12+25=40, no. 7+12+25=44, no. 3+7+12+25+42=89, no. Wait, did I check all possibilities? With just 5 numbers, there are 32 possible subsets (and as the sketch below confirms, none of them hits 50). With 100 numbers, there are more than a million trillion trillion subsets, far more than the number of stars in the observable universe.
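A brute-force check is easy to write; here is a sketch in Python (ours, and deliberately naive). It tries every subset, which is exactly why it stops scaling: the number of subsets doubles with every number you add.

```python
from itertools import combinations

def subset_sum(numbers, target):
    """Try every subset; return one that hits the target, or None.
    With n numbers there are 2**n subsets, so this only works
    for short lists."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

print(subset_sum([3, 7, 12, 25, 42], 50))  # None: no subset hits 50
print(subset_sum([3, 7, 12, 25, 42], 52))  # (3, 7, 42)
```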
Computer scientists group problems by how hard they are to solve as the inputs get bigger. The class P contains problems that can be solved in “polynomial time”—basically, problems where doubling the input size doesn’t make them catastrophically harder. Addition, sorting, and finding the shortest path on a map are all in P.
The class NP contains problems where, if someone gives you the answer, you can check it quickly. Subset Sum is in NP: if someone claims “the subset {3, 7, 42} adds up to 52,” you can add 3+7+42=52 and confirm in a second. But finding that answer in the first place is hard.
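Checking a claimed answer, by contrast, takes only a few lines (again a sketch; the function name is our own):

```python
def verify_subset_sum(numbers, claimed_subset, target):
    """Fast check: is every claimed element really in the list,
    and does the subset add up to the target?"""
    remaining = list(numbers)
    for x in claimed_subset:
        if x not in remaining:
            return False
        remaining.remove(x)   # handles duplicate numbers correctly
    return sum(claimed_subset) == target

print(verify_subset_sum([3, 7, 12, 25, 42], [3, 7, 42], 52))  # True
```

The verifier looks at each claimed number once; the brute-force search above may look at all 2^n subsets. That gap between checking and finding is the heart of what follows.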
Here’s the million-dollar question: Is every problem in NP also in P? In other words, if you can check an answer quickly, can you also find the answer quickly? Nobody knows. This is the P vs NP problem, and it’s one of the biggest unsolved questions in computer science. The Clay Mathematics Institute offers a million-dollar prize for solving it.
Most computer scientists believe P is not equal to NP—that some problems are genuinely hard to solve, even though their answers are easy to check. But nobody has proved this.
The Strange Universe of Complexity
Here’s what we do know. There are classes within classes:
- P (easy to solve)
- NP (answers easy to check, maybe easy to solve—we don’t know)
- PSPACE (problems solvable with a reasonable amount of memory, even if they take an astronomical amount of time)
- EXPTIME (problems solvable in exponential time—which means they’re definitely not practical)
We know that P is contained in NP, NP is contained in PSPACE, and PSPACE is contained in EXPTIME. We also know, from a result called the time hierarchy theorem, that P is not equal to EXPTIME, so at least one of those containments must be strict. The strange part is that nobody knows which one. For all anyone has proved, P could equal NP, or NP could equal PSPACE, or PSPACE could equal EXPTIME; they just can’t all be equalities at once.
What makes this all even stranger is that many natural problems turn out to be “complete” for these classes. An NP-complete problem is among the hardest problems in NP: if you could solve it quickly, you could solve every problem in NP quickly. Thousands of important practical problems, from scheduling to circuit design to protein folding, are NP-complete.
Why This Matters
You might think: “So what if some problems are hard? Computers keep getting faster.” But here’s the thing: faster computers barely dent truly hard problems. If a problem takes 2^n steps (exponential time) and n is 100, that’s about 1.3 × 10^30 steps. A computer doing a billion steps per second would need roughly 40 trillion years; a computer a trillion times faster would still need about 40 years. And adding just ten more items to the input multiplies the time by another thousand.
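You can check this arithmetic yourself; the machine speeds below are our illustrative assumptions, not measurements:

```python
# How long does 2**n steps take? (Assumed speeds, for illustration.)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

steps = 2 ** 100          # about 1.3e30 steps
slow = 1e9                # a billion steps per second
fast = slow * 1e12        # a trillion times faster

print(steps / slow / SECONDS_PER_YEAR)   # ~4e13 years (40 trillion)
print(steps / fast / SECONDS_PER_YEAR)   # ~40 years
```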
This means that for some problems, we have to give up on exact answers and settle for approximations. If you can’t find the perfect solution to a scheduling problem, maybe you can find one that’s 95% good. The halting problem means we can’t automatically tell whether every program will run forever—but in practice, we can often figure it out for the programs that people actually write.
The deeper lesson is that computation has limits that aren’t about technology. They’re about logic itself. Some questions can’t be answered by any computer, no matter how fast. Some questions can be answered, but not in any reasonable time. Understanding these limits helps us know what problems to work on, and what kinds of solutions to look for.
There’s something beautiful about this: before computers even existed, mathematicians figured out exactly what computers could and couldn’t do. The limits aren’t engineering problems we’ll eventually overcome. They’re built into the fabric of mathematics—and that’s a strange and wonderful thing to discover.
Appendices
Key Terms
| Term | What it means |
|---|---|
| Turing machine | A simple imaginary machine that captures the idea of “any possible computation” |
| Universal Turing machine | A single machine that can run any program—the theoretical basis for modern computers |
| Halting problem | The question of whether a program will eventually stop or run forever; Turing proved it’s unanswerable in general |
| P | Problems that can be solved in a reasonable amount of time, even as inputs get large |
| NP | Problems where you can check an answer quickly, but finding it might be hard |
| NP-complete | The hardest problems in NP; if you could solve one quickly, you could solve them all |
| Diagonal argument | A clever logical trick that proves something can’t exist by making it contradict itself |
Key People
- Alan Turing (1912–1954): A British mathematician and codebreaker who invented the Turing machine, proved the halting problem unsolvable, and helped crack German codes during World War II.
- Alonzo Church (1903–1995): An American mathematician who independently proved that the decision problem for mathematics is unsolvable, using his lambda calculus rather than Turing’s machines.
- David Hilbert (1862–1943): A famous German mathematician who believed all mathematical problems could be solved by an algorithm; the discovery that he was wrong shocked the mathematical world.
Things to Think About
- The halting problem says you can’t write a program that checks every other program. But most programs you write (or that exist in the world) can be checked. What’s the difference between “impossible in general” and “impossible for everything”? Are there other things in life that are impossible in general but work most of the time?
- If P = NP, then finding answers is no harder than checking them. This would mean that creativity (finding answers) is as easy as verification (checking answers). Do you think that’s true? Or is there something genuinely harder about coming up with an answer than checking one?
- Subset Sum is easy when the list is short and hard when it’s long. Is this a problem about the numbers or about our brains? If we were somehow smarter, would Subset Sum become easy? Or is it genuinely, objectively hard?
Where This Shows Up
- Password security: Much of modern public-key encryption (RSA, for example) relies on the assumption that factoring large numbers is hard (the problem is in NP but believed not to be in P). If P = NP, much of modern security would break.
- Video game design: Pathfinding (how characters navigate around obstacles) is a shortest-path problem, which is in P; that’s why a game can afford to recompute routes many times per second.
- Your school schedule: The problem of assigning students to classes, teachers to rooms, and times to everything is often NP-complete. Schools use approximate solutions that work well enough.
- Protein folding: Simplified mathematical models of protein folding are NP-complete. This is why we use supercomputers and clever approximations instead of exact solutions.