Counterfactuals: What *Would* Have Happened?
You’re walking home from school. You take your usual route. Nothing special happens. But later that night, you wonder: what if you had taken the other street instead? Would you have gotten home faster? Would you have run into a friend? Would something completely unexpected have happened?
These “what if” thoughts are everywhere. We use them to blame people (“If you had studied, you would have passed”), to feel relief (“If I’d been standing there, that branch would have hit me”), to make plans (“If we leave now, we’ll make it”), and to understand why things happened the way they did.
Philosophers call these counterfactuals—sentences about what would have happened if things had been different. Despite being something we say all the time, counterfactuals are surprisingly hard to pin down. What do they actually mean? How do we know if one is true? These questions turn out to be deep and strange.
The Basic Puzzle
Here’s a simple counterfactual:
If I had struck this match, it would have lit.
Seems obvious, right? But wait—a struck match won’t light without oxygen. It won’t light if it’s wet. It won’t light if the striking surface is covered in grease. It won’t light if the laws of physics suddenly stop working. The list of things that have to be true for the match to light is basically endless.
So when you say “If I had struck this match, it would have lit,” you’re secretly assuming a whole bunch of things: that oxygen is present, that the match is dry, that physics works normally, and so on. But which assumptions are you allowed to keep? You can’t keep all the facts about the actual world—because one of those facts is that the match wasn’t struck. Another is that it didn’t light. If you keep those, then even in the “what if” world the match wouldn’t light.
This is called Goodman’s problem, named after the philosopher Nelson Goodman, who pointed it out in 1947. The problem is this: when you imagine a different world where the antecedent (the “if” part) is true, you have to decide which parts of reality to keep and which to change. But it’s surprisingly hard to do this without already knowing which counterfactuals are true—which makes the whole thing circular.
Context Matters
Consider another example, from philosopher W.V. Quine:
(a) If Caesar had been in charge during the Korean War, he would have used the atom bomb.
(b) If Caesar had been in charge during the Korean War, he would have used catapults.
You can imagine saying either of these in different conversations. If you’re talking about Caesar’s brutality, (a) makes sense. If you’re talking about what weapons were available in his day, (b) makes sense. Which counterfactual is “really” true depends on what you’re focusing on.
This suggests counterfactuals are context-sensitive—their truth depends on the situation where they’re said. But how exactly does context decide which background facts to keep and which to drop? Philosophers still argue about this.
The Strange Behavior of Counterfactuals
Counterfactuals don’t behave like ordinary “if-then” statements. In normal logic, if “If A then C” is true, then “If A and B then C” should also be true. (If “If it rains, the ground gets wet” is true, then “If it rains and the sun is shining, the ground gets wet” is also true—right?)
But counterfactuals break this pattern:
If I were an Olympic athlete, I would win the race.
If I were an Olympic athlete but had a broken leg, I would not win the race.
If I were an Olympic athlete and had a broken leg but were racing snails, I would win the race.
Adding more conditions flips the truth back and forth. Philosophers call this non-monotonicity—the weird property that adding information to the “if” part need not preserve the truth of the whole statement.
This also means some rules that work for ordinary logic break down for counterfactuals:
- Transitivity fails: if A > B and B > C (read “A > B” as “if A were the case, B would be the case”), it doesn’t always follow that A > C.
- Contraposition fails: “If A then C” doesn’t always mean “If not C then not A.”
For example, transitivity can fail: “If J. Edgar Hoover had been born Russian, he would have been a communist” and “If Hoover were a communist, he would be a traitor” might both be true. But “If Hoover had been born Russian, he would be a traitor” seems false—had he been born Russian, he would have been a loyal Russian communist, not a traitor.
How to Think About What Would Happen
The most popular theory among philosophers is called the variably strict analysis, developed by David Lewis and Robert Stalnaker. Here’s the basic idea:
When you evaluate a counterfactual, you don’t consider all possible worlds where the antecedent is true. You only consider the closest ones—the worlds that are most similar to the actual world while still making the antecedent true. If the consequent is true in all those closest worlds, the counterfactual is true; otherwise, it’s false.
So when you ask “What would happen if I struck this match?”, you consider worlds that are as much like the actual world as possible, except the match gets struck. In those worlds, the match is dry (because it is in reality), there’s oxygen (because there is in reality), and physics works normally (because it does in reality). So the match lights.
But you don’t consider worlds where a wizard appears and casts a spell on the match, even though that’s technically possible. Those worlds are “farther away”—less similar to reality.
This explains why adding conditions can change the truth: the closest worlds where I’m an Olympic athlete don’t include me having a broken leg. But the closest worlds where I’m an athlete and have a broken leg do—and in those worlds, I lose.
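To make the recipe concrete, here is a minimal sketch in Python. The toy worlds, the choice of which facts count as “background,” and the crude fact-counting distance measure are all invented for illustration; neither Lewis nor Stalnaker proposed anything this simple. The same sketch also reproduces the non-monotonicity from earlier: strengthening the antecedent changes which worlds count as closest.

```python
# A minimal sketch of the "closest worlds" recipe, with toy worlds and a naive
# similarity measure invented purely for illustration.

# The actual world: the match was never struck and never lit.
actual = {"struck": False, "dry": True, "oxygen": True, "lit": False}

# A few candidate "what if" worlds where the match does get struck.
candidate_worlds = [
    {"struck": True, "dry": True,  "oxygen": True,  "lit": True},   # ordinary strike
    {"struck": True, "dry": False, "oxygen": True,  "lit": False},  # the match got wet
    {"struck": True, "dry": True,  "oxygen": False, "lit": False},  # no oxygen around
]

def distance(world, base):
    """Naive similarity: count how many background facts differ from actuality."""
    background = ("dry", "oxygen")
    return sum(world[f] != base[f] for f in background)

def would(antecedent, consequent, worlds, base):
    """True if the consequent holds in every antecedent-world closest to base."""
    a_worlds = [w for w in worlds if antecedent(w)]
    if not a_worlds:
        return True  # vacuously true; see the section on counterpossibles
    best = min(distance(w, base) for w in a_worlds)
    closest = [w for w in a_worlds if distance(w, base) == best]
    return all(consequent(w) for w in closest)

# "If I had struck this match, it would have lit."
print(would(lambda w: w["struck"], lambda w: w["lit"],
            candidate_worlds, actual))                     # True

# Strengthen the antecedent and the verdict flips (non-monotonicity):
# "If I had struck this match and it had been wet, it would have lit."
print(would(lambda w: w["struck"] and not w["dry"], lambda w: w["lit"],
            candidate_worlds, actual))                     # False
```

All the philosophical work, of course, hides inside the distance function, which is exactly where the “similarity” worry in the next section bites.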
The Problem with “Similarity”
This all sounds good until you ask: similar in what way? Similarity is slippery. New York and San Francisco are similar in some ways (big cities, expensive) but not others (climate, geography). What counts as “similar” for counterfactuals?
Philosopher Kit Fine came up with a famous objection. Imagine that President Nixon had a button that would have launched nuclear missiles at Russia. He never pressed it. So we’d say:
If Nixon had pressed the button, there would have been a nuclear war.
But according to the similarity theory, this might come out false. Consider a world where Nixon presses the button, but a tiny short circuit miraculously prevents the missiles from firing. That world seems more similar to the actual world (where there was no nuclear war) than one where millions die. So the theory seems to say the counterfactual is false—but it clearly seems true.
Lewis responded by creating a complicated system for weighting different kinds of similarity: avoiding big, widespread violations of physical law matters more than preserving particular facts, for example. But many philosophers think this gets too messy. The similarity approach seems to need a whole theory of what matters in the universe just to handle something as simple as “what if.”
Other Approaches
Some philosophers try to avoid similarity entirely by using more objective tools.
Probabilistic analyses say counterfactuals are about chance: “If A, then C” means the probability of C given A was high. This connects counterfactuals to science, where probabilities play a central role. But it faces its own problems. According to quantum mechanics, there’s a tiny chance that every air molecule in a room could suddenly tunnel outside. So when you say “If I were in my office, I would breathe normally,” quantum mechanics says that’s not certain—there’s a tiny chance you’d suffocate. Does that make the counterfactual false? Some philosophers (called counterfactual skeptics) say yes—most ordinary counterfactuals are technically false, because there’s always some weird quantum possibility. Most people find this conclusion hard to accept.
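As a toy illustration of this reading (the probability and the threshold below are invented for the example), the counterfactual counts as true whenever the conditional probability clears some high bar, while the skeptic insists that anything short of certainty is not good enough:

```python
# A toy version of the probabilistic reading. The chance of the "weird"
# quantum mishap and the truth threshold are both invented for illustration.
P_WEIRD = 1e-9                                # assumed tiny chance of the bizarre outcome
p_consequent_given_antecedent = 1 - P_WEIRD   # P("breathe normally" | "in my office")

THRESHOLD = 0.999                             # where to set this bar is itself disputed

# On the "high probability" reading, the counterfactual comes out true:
print(p_consequent_given_antecedent >= THRESHOLD)    # True

# The counterfactual skeptic demands certainty, and certainty never quite arrives:
print(p_consequent_given_antecedent == 1.0)          # False
```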
Interventionist semantics uses ideas from computer science and AI. It treats counterfactuals like experiments: to check “If A, then C,” you imagine “intervening” to make A true, cutting A off from its normal causes, and seeing what happens to C. This is especially useful for causal reasoning—if I want to know whether taking an aspirin would cure my headache, I imagine intervening to make myself take the aspirin and seeing what happens. Versions of this approach show up in AI research and in climate science, where researchers ask what would have happened without a particular cause.
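Here is a minimal sketch of that idea, loosely in the spirit of the structural causal models used in this literature; the variables and their equations are made up for the aspirin example, and real causal-modeling tools involve far more machinery.

```python
# A toy interventionist evaluation: each variable is computed from its causes,
# and an intervention overrides one variable's equation ("cuts it off" from them).

def run_model(take_aspirin=None):
    """Run the toy causal model; pass take_aspirin to intervene on that variable."""
    stressed = True                                   # exogenous background fact
    headache = stressed                               # stress causes the headache
    # Natural equation: left to myself, I don't take the aspirin.
    took_aspirin = False if take_aspirin is None else take_aspirin
    headache_later = headache and not took_aspirin    # aspirin would relieve it
    return headache_later

# The actual world: no aspirin, so the headache persists.
print(run_model())                     # True

# "If I took an aspirin, would my headache go away?"
# Intervene: sever took_aspirin from its normal causes and simply set it to True.
print(run_model(take_aspirin=True))    # False: in the intervened model, no headache
```

The key move is that the intervention overrides the variable’s usual equation instead of merely conditioning on its observed value; that is what “cutting A off from its normal causes” amounts to.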
What About Impossible Counterfactuals?
Here’s a weird one: “If Hobbes had squared the circle, he would have been famous.” Squaring the circle is mathematically impossible—it can’t be done. So technically, there are no possible worlds where the antecedent is true. Does that make the counterfactual false? Or automatically true (since there are no counterexamples)?
Most standard theories say it’s automatically, vacuously true. But that seems wrong: “If Hobbes had squared the circle, he would have cured world hunger” also comes out true, which it shouldn’t. These are called counterpossibles, and they’re a hot topic of debate. Some philosophers think we need to include impossible worlds in our theories. Others think we can explain our judgments away with psychology. Nobody’s sure yet.
Why This Matters
Counterfactuals aren’t just a philosophical puzzle. They’re at the heart of how we think about:
- Causation: What does it mean to say something caused something else? Many philosophers think causation just is counterfactual dependence: “A caused B” means “if A hadn’t happened, B wouldn’t have happened.”
- Knowledge: Do you really know something if you could easily have been wrong? Some philosophers think knowledge requires “safety”—that you wouldn’t believe something false in close possible worlds.
- Free will: When someone says “you could have done otherwise,” they’re making a counterfactual claim. Whether that claim can be true in a determined universe is one of the biggest debates in philosophy.
- Decision-making: When you decide what to do, you’re comparing counterfactuals: “What would happen if I did X vs. Y?”
We use counterfactuals constantly, without thinking about how strange they are. Philosophers have spent decades trying to pin them down, and the debate is still wide open. Next time you catch yourself wondering “what if…”, you’ll be in good company.
Appendix: Key Terms
| Term | What it means |
|---|---|
| Counterfactual | A statement about what would happen if things were different from how they actually are |
| Antecedent | The “if” part of a counterfactual (“If I had struck the match…”) |
| Consequent | The “then” part (“…it would have lit”) |
| Non-monotonicity | The weird property that adding more conditions to the “if” part can change the truth of the whole statement |
| Similarity | The idea that some possible worlds are “closer” to reality than others, used to decide which ones matter |
| Possible world | A complete way things could have been; a tool for thinking about what’s possible |
| Counterpossible | A counterfactual whose antecedent is impossible (like “If Hobbes had squared the circle…”) |
Key People
- Nelson Goodman — An American philosopher who first pointed out the problem of deciding which background facts to keep when evaluating counterfactuals.
- David Lewis — A hugely influential philosopher who developed the most popular theory of counterfactuals based on similarity between possible worlds.
- Robert Stalnaker — Developed a similar theory around the same time as Lewis (slightly earlier, in fact), with slightly different details about how “closest” worlds work.
- Kit Fine — A philosopher who pointed out a famous problem with the similarity approach (the Nixon button example).
Things to Think About
- The butterfly effect. If you changed one tiny thing about the past—say, you woke up one minute later this morning—would the entire history of the universe be different? If so, how can any counterfactual about your life be true? The “closest possible world” where you wake up late might be completely unrecognizable.
- Moral responsibility. We blame people for things they did, partly based on counterfactuals: “If they had chosen differently, they wouldn’t have hurt anyone.” But if the universe is deterministic—if everything that happens is caused by what came before—can those counterfactuals ever be true? Does that mean nobody is really responsible?
- Your own counterfactuals. Think of a big decision you made recently. Now imagine you chose differently. Can you actually picture what would have happened? How do you decide which details to keep the same and which to change? Try to notice what criteria you’re using unconsciously.
- The quantum objection. If quantum mechanics means there’s always a tiny chance something weird could happen, are most counterfactuals actually false? Or should we ignore those tiny possibilities? Where do you draw the line between “possible enough to matter” and “too weird to consider”?
Where This Shows Up
- History and crime investigations — “What if this had happened instead?” is the basic logic of figuring out causes after the fact.
- Sports commentary — “If he hadn’t slipped, he would have scored” is a counterfactual. Commentators argue about these constantly without realizing they’re doing philosophy.
- Video games with branching stories — Games that let you make choices and see different outcomes are basically machines for generating counterfactuals.
- Blame and praise — When you say “You should have…” or “If only I had…”, you’re making counterfactual claims about responsibility.
- Regret — Feeling regret means believing that a different past would have led to a better present. That’s a counterfactual claim with emotional weight.