How Hard Is a Problem? A Short Guide to Computational Complexity
Imagine you’re playing a game where you have to visit every city on a map exactly once and then return home, trying to keep your total distance as short as possible. This is called the Traveling Salesman Problem. For three cities, it’s trivial. For ten, you could probably figure it out with some patience. For a hundred cities, you’d be at it longer than the universe has existed if you tried to check every possible route. And yet, nobody knows for sure whether there’s a clever shortcut that would let you solve it instantly, no matter how many cities there are.
That’s the kind of puzzle computational complexity theory deals with. It’s not about whether a problem can be solved—most of the ones we’ll talk about can be, in principle. It’s about whether it can be solved efficiently, with a reasonable amount of time and memory.
The Basic Idea: Big vs. Small Problems
Think about two very different tasks. First: given two numbers, say 1,234 and 5,678, do they share any common factor other than 1? You can probably figure this out quickly using a method Euclid invented over 2,000 years ago. The bigger the numbers get, the more steps you need, but the number of steps grows slowly—roughly proportional to the number of digits in the numbers. A 100-digit number might take a few hundred steps. That’s manageable.
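If you want to see how simple this method is, here is a small Python sketch of Euclid's algorithm (the function name is mine, not a standard one). The loop shrinks the numbers so quickly that even huge inputs need only a modest number of passes.

```python
def euclid_gcd(a: int, b: int) -> int:
    """Greatest common divisor via Euclid's algorithm.

    Each pass replaces one number with a remainder, so the values shrink
    fast: the number of passes grows roughly with the number of digits,
    not with the size of the numbers themselves.
    """
    while b != 0:
        a, b = b, a % b
    return a

# The two numbers share a factor other than 1 exactly when the GCD exceeds 1.
print(euclid_gcd(1234, 5678))  # 2, so yes: both are even
```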
Now consider a different task: given a number, find its prime factors. For example, factor 15 into 3 × 5. Easy, right? But try factoring a 600-digit number, the size used in modern encryption. The most efficient methods we know would take longer than the age of the universe on any computer we can build today. The problem isn’t that we don’t have fast enough computers—it’s that the number of steps grows explosively as the input gets larger. Adding digits doesn’t add a fixed amount of work; it multiplies the work, over and over.
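To see why, here is a sketch of the most naive factoring method, trial division. Nothing about the code is clever; the point is how the number of loop iterations scales with the number of digits. (The example numbers are tiny and chosen only for illustration.)

```python
def trial_division(n: int) -> list[int]:
    """Factor n by trying every candidate divisor up to sqrt(n).

    For a d-digit number this loop can run on the order of 10**(d/2)
    times, so every extra pair of digits multiplies the work by about 10.
    """
    factors = []
    divisor = 2
    while divisor * divisor <= n:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    if n > 1:
        factors.append(n)  # whatever is left over is prime
    return factors

print(trial_division(15))    # [3, 5]
print(trial_division(1234))  # [2, 617]
# Smarter algorithms than this exist, but even they become impractical
# for the several-hundred-digit numbers used in real encryption.
```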
This contrast is the heart of complexity theory. Some problems are feasible—they can be solved in practice for the kinds of inputs we actually care about. Others are intractable—they might be solvable in principle, but no efficient method exists (or we haven’t found one yet).
Measuring Difficulty: The Growth of Work
Here’s where it gets technical, but the core idea is simple. When we measure how hard a problem is, we don’t just count the number of steps for one specific input. We look at how the number of steps grows as the input gets bigger. This is called the rate of growth.
Suppose you have an algorithm that takes 100 steps to handle an input of size 10, and 400 steps for size 20. That’s a quadratic rate—the steps grow like the square of the input size. Mathematicians write this as O(n²). If it took 1,024 steps for size 10 and over a million for size 20, that’s exponential growth—O(2ⁿ). The difference between polynomial growth (n, n², n³…) and exponential growth (2ⁿ, 3ⁿ…) is enormous. For an input of size 100, a quadratic algorithm takes 10,000 steps—trivial for a computer. An exponential algorithm needs about 2¹⁰⁰ steps, roughly 10³⁰ of them, which works out to thousands of times the age of the universe even at a billion steps per second.
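A few lines of Python make the gap vivid. The numbers below are exact; notice how the quadratic column stays tame while the exponential column runs away.

```python
# Compare polynomial (n^2) and exponential (2^n) growth side by side.
for n in (10, 20, 50, 100):
    print(f"n = {n:>3}:  n^2 = {n**2:>6}   2^n = {2**n}")

# n =  10:  n^2 =    100   2^n = 1024
# n =  20:  n^2 =    400   2^n = 1048576
# n =  50:  n^2 =   2500   2^n = 1125899906842624
# n = 100:  n^2 =  10000   2^n = 1267650600228229401496703205376
```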
The Cobham-Edmonds Thesis, which is widely accepted among computer scientists, says that a problem is feasibly solvable exactly when there’s an algorithm whose running time grows polynomially with the input size. Problems solvable in polynomial time belong to the class called P.
The Class That Drives Everyone Crazy: NP
Here’s where things get weird. There’s another class of problems called NP (non-deterministic polynomial time). These are problems where, if someone hands you a proposed solution (a “certificate”), you can check whether it’s correct in polynomial time. But actually finding the solution might be much harder.
Think about a Sudoku puzzle. If someone shows you a completed grid, you can check pretty quickly whether it follows the rules. But solving it from scratch takes much more work. In fact, Sudoku (generalized to any size board) is in NP. The Traveling Salesman Problem is also in NP—if someone claims they’ve found a tour of 500 miles, you can check their math, but finding that tour yourself is another matter.
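Here is a rough sketch, in Python, of what checking a certificate looks like for the Traveling Salesman Problem. The function name, the city names, and the mileages are all invented for illustration, and the code assumes every pair of cities has a listed distance; the point is that the check is a single cheap pass over the claimed tour.

```python
def verify_tour(distances: dict[tuple[str, str], float],
                tour: list[str], budget: float) -> bool:
    """Check a claimed TSP tour: it must visit every city exactly once,
    return to the start, and stay within the distance budget.

    The check is one pass over the tour -- polynomial time -- no matter
    how much work it took to find the tour in the first place.
    """
    cities = set()
    for pair in distances:
        cities.update(pair)
    if set(tour) != cities or len(tour) != len(cities):
        return False
    total = 0.0
    for here, there in zip(tour, tour[1:] + tour[:1]):
        key = (here, there) if (here, there) in distances else (there, here)
        total += distances[key]
    return total <= budget

# Hypothetical example: four cities and a claimed 500-mile tour.
miles = {("A", "B"): 120, ("A", "C"): 150, ("A", "D"): 180,
         ("B", "C"): 90, ("B", "D"): 140, ("C", "D"): 100}
print(verify_tour(miles, ["A", "B", "C", "D"], 500))  # True: 120+90+100+180 = 490
```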
Here’s the crucial puzzle: nobody knows whether P equals NP. Is every problem whose solution can be checked quickly also solvable quickly? Or are some problems inherently harder to solve than to verify? Most computer scientists believe P ≠ NP—that there are problems in NP that can’t be solved in polynomial time. But nobody has proven it. This is the most important open question in theoretical computer science, and one of the Clay Mathematics Institute’s million-dollar Millennium Prize Problems.
The Hardest Problems in NP: Completeness
Some problems in NP are special: they’re NP-complete. This means that if you could solve any one of them quickly, you could solve every problem in NP quickly. They’re the “hardest” problems in the class.
How do we know this? Through reductions. A reduction translates any instance of problem B into an instance of problem A, so that a fast solver for A would automatically give you a fast solver for B; if B is known to be hard, then A must be at least as hard. For example, the Traveling Salesman Problem is NP-complete. So is the Satisfiability Problem—given a logical formula built from ANDs, ORs, and NOTs, can you assign truth values to the variables to make it true? So is finding a Hamiltonian cycle in a graph—a route that visits every vertex exactly once and returns to where it started (the Hamiltonian path version, which drops the return trip, is just as hard).
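One classic reduction can even be written out in a few lines. The Python sketch below (with invented names) translates the Hamiltonian cycle question into a Traveling Salesman question: existing edges get distance 1, missing edges get distance 2, and the graph has a Hamiltonian cycle exactly when there is a tour of total length at most the number of vertices. A fast TSP solver would therefore answer Hamiltonian cycle questions for free.

```python
from itertools import combinations

def hamiltonian_cycle_to_tsp(vertices: list[str], edges: set[frozenset[str]]):
    """Classic reduction: build a TSP instance whose shortest tour has
    length len(vertices) exactly when the graph has a Hamiltonian cycle.

    Edges that exist get distance 1, missing edges get distance 2, and
    the question becomes: is there a tour of length <= len(vertices)?
    """
    distances = {}
    for u, v in combinations(vertices, 2):
        distances[(u, v)] = 1 if frozenset((u, v)) in edges else 2
    budget = len(vertices)
    return distances, budget

# Hypothetical 4-vertex graph: a square A-B-C-D-A with no diagonals.
graph_edges = {frozenset(e) for e in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]}
dist, budget = hamiltonian_cycle_to_tsp(["A", "B", "C", "D"], graph_edges)
# Any tour of length <= 4 in `dist` must use only distance-1 edges,
# i.e. it traces a Hamiltonian cycle in the original graph.
```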
There are thousands of NP-complete problems, coming from every area of mathematics and computer science. If P ≠ NP (as most believe), then none of them have efficient algorithms. This isn’t just an abstract concern—it affects real things. The security of many encryption systems, for instance, depends on the assumption that factoring large numbers (which is in NP) is hard.
Beyond P and NP
The landscape is richer than just P and NP. There’s PSPACE: problems solvable with a reasonable amount of memory but possibly an unreasonable amount of time. Deciding who wins in generalized versions of games like Go and chess, under certain rule modifications, is PSPACE-complete, and the unrestricted versions appear to be harder still. Then there’s EXP, the problems solvable in exponential time; the time hierarchy theorem guarantees that some of them genuinely need that much time. And there are whole hierarchies of classes in between, like the polynomial hierarchy.
There’s also BPP, problems solvable by algorithms that can flip coins—probabilistic algorithms. If you can solve a problem correctly 99.999% of the time using randomness, that might be just as good as a guaranteed method. And quantum computing introduces BQP, problems solvable efficiently by quantum computers, like factoring (thanks to Shor’s algorithm).
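One famous coin-flipping algorithm is the Miller–Rabin primality test. The Python sketch below is a standard textbook version, not production code: each random round can be fooled by a composite number with probability at most 1/4, so forty rounds push the chance of a wrong "prime" verdict below about one in 10²⁴.

```python
import random

def probably_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin: a randomized primality test.

    Each round picks a random base and runs a cheap modular check; a
    composite number survives a round with probability at most 1/4, so
    after 40 rounds the chance of a wrong 'True' is below 1/4**40.
    """
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # this base witnesses that n is composite
    return True

print(probably_prime(2**61 - 1))  # True: a known Mersenne prime
print(probably_prime(2**61 + 1))  # False: divisible by 3
```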
Each of these classes gives us a different lens for understanding what makes a problem hard. But many of the relationships between them—like whether P = NP, or whether BPP = P—are still open questions.
Why This Matters Outside Computer Science
Complexity theory has surprising connections to logic, epistemology, and even the philosophy of mathematics. One example: the problem of logical omniscience. Traditional theories of knowledge say that if you know something, you know all its logical consequences. But there are infinitely many logical consequences of any statement, and some of them have proofs so long that no human could ever check them. Complexity theory shows us that even simple logical reasoning can be intractable—so the idealized “knowing everything” of traditional epistemology is impossible for any real agent.
Another connection is to the foundations of mathematics. Kurt Gödel once wrote a letter suggesting that if P = NP, mathematical creativity could be automated—we could have computers that find proofs as well as any human. This was partly because the problem of whether a mathematical statement has a proof of a given length is in NP. If checking proofs is as easy as finding them, mathematics becomes a very different enterprise.
There’s even a school of thought called strict finitism that questions whether very large numbers really exist—or at least whether we can meaningfully talk about them. A strict finitist might say that 10¹² doesn’t denote a genuine number because we can never count to it. Complexity theory gives some tools for thinking about these issues, though not everyone agrees on what they show.
What We Still Don’t Know
The most striking thing about computational complexity is how much we don’t know. We can’t prove that any of the thousands of NP-complete problems we’ve studied actually lacks an efficient algorithm. We can’t prove that quantum computers can do more than classical ones. We can’t prove that randomness helps. The evidence for these beliefs is strong—decades of failure to find efficient algorithms, heuristic arguments, and structural results—but it’s all circumstantial.
For a smart 12-year-old, here’s the takeaway: we live in a world where some problems are easy and some are hard, and we suspect that the difference between easy and hard has a deep mathematical structure that we’re only beginning to understand. The questions are right at the boundary of what we know how to ask, let alone answer. And the strange thing is that these questions aren’t about the physical world—they’re about abstract problems, about logic and mathematics itself. They’re about what can and cannot be figured out, given finite time and finite minds.
Appendix: Key Terms
| Term | What it means |
|---|---|
| Algorithm | A step-by-step procedure for solving a problem |
| Polynomial time | Running time that grows like n² or n³—the standard for “feasible” |
| Exponential time | Running time that grows like 2ⁿ—the standard for “intractable” |
| P | The class of problems solvable in polynomial time |
| NP | The class of problems whose solutions can be checked in polynomial time |
| NP-complete | The hardest problems in NP—if you solve one, you solve them all |
| Reduction | A way of showing that one problem is at least as hard as another |
| Certificate | A proposed solution that can be checked quickly |
Appendix: Key People
- Alan Turing — British mathematician who defined what computation means and helped prove some problems are fundamentally unsolvable
- Stephen Cook — Showed that the Satisfiability Problem is NP-complete, launching the study of NP-completeness
- Richard Karp — Proved 21 different problems were NP-complete, showing how widespread the phenomenon is
- Kurt Gödel — Austrian logician who anticipated the P vs. NP question in a letter, suggesting it would change mathematics
- Leonid Levin — Independently discovered NP-completeness around the same time as Cook
Appendix: Things to Think About
- If P = NP were proven true, what would happen to encryption systems that depend on the difficulty of factoring? Could there be some other way to keep secrets?
- Some problems in NP are easy to solve for most real-world instances, even if they’re hard in the worst case. Does “worst-case hardness” matter if the hard cases never come up?
- If you could check a math proof instantly but couldn’t find one yourself, would that count as “knowing” the theorem? What does this say about the difference between verification and discovery?
- Quantum computers can factor numbers quickly (using Shor’s algorithm), but nobody knows if they can solve NP-complete problems quickly. Why might quantum physics change what’s computable?
Appendix: Where This Shows Up
- Cryptography: Your passwords, credit card numbers, and private messages online are protected by the assumption that certain problems (like factoring) are hard
- Video games: Pathfinding, resource allocation, and AI opponents all depend on algorithms whose feasibility is exactly what complexity theory measures
- Logistics: Airlines scheduling flights, delivery companies routing trucks, and factories scheduling production all face NP-hard problems
- Scientific computing: Protein folding, drug discovery, and climate modeling involve problems at the edge of feasibility
- Everyday reasoning: Even figuring out whether a set of statements is consistent—something we do constantly—is NP-complete in the general case