What Is a Cause? (And How Do We Know When Something Really Caused Something Else?)
Imagine you’re playing a video game. You press a button, and your character jumps. Did pressing the button cause the jump? Usually, yes. But what if there’s a glitch, and the character sometimes jumps even when you don’t press anything? Or what if two buttons both make the character jump, but only the first one pressed actually counts? These aren’t just game-design problems. They’re versions of a puzzle philosophers have been wrestling with for centuries: what does it mean for one thing to cause another?
This article is about the metaphysics of causation—not the science of how causes work (that’s physics), but the deeper question of what causation is. It turns out to be surprisingly hard to say.
What Kind of Things Can Be Causes?
Let’s start with something simple. You’re at school, and your friend trips you. That made you fall. The tripping caused the falling. But what exactly was the cause? The event of your friend sticking out their foot? The fact that they did it on purpose? The whole situation?
Philosophers disagree about what kinds of things can be causes and effects. The most popular view is that causes and effects are events—things that happen at particular times and places. Your friend’s foot moving is an event. You hitting the ground is an event. Simple enough.
But consider this: reaching across the desk for your cigarettes, you accidentally knock over a bottle of ink, which stains the carpet. Your reaching for your cigarettes caused the stain. But “reaching for cigarettes” isn’t a type of event that usually causes carpet stains. So this is a token cause: a particular, one-off causal link, not an instance of a general pattern.
This brings us to a tricky question. How fine-grained should we be when we talk about causes? Suppose a metal ball is spinning on a hotplate. The ball heats up. The ball also rotates. These happen at exactly the same time and place. Are they the same event? If you think events are just chunks of spacetime, then yes—the heating is the rotation. But they seem to have different causes. The hotplate caused the heating but not the rotation. The spin caused the rotation but not the heating. So maybe they’re different events after all.
This is where things get weird. Philosopher Donald Davidson suggested that we should individuate events by their causes and effects. If two events have different causes or different effects, they’re different events. That seems to solve the ball problem. But it also means we can’t talk about what causes and effects are without already knowing something about causation. That’s circular—and philosophers hate circular explanations.
The Problem of Absences
Here’s another puzzle. Your plant died because you forgot to water it. Did your failure to water it cause its death? That seems natural to say. But a failure isn’t really an event—it’s a non-event, an absence. How can something that didn’t happen be a cause?
Some philosophers say absences can’t be causes. The plant died because of what did happen (like the soil drying out), not because of what didn’t happen. Others say that’s too strict. If your friend promises to water your plant and doesn’t, while the neighbor next door also doesn’t water it, we naturally say the friend’s failure caused the death, not the neighbor’s. But both didn’t water it. The difference seems to be about what was expected or promised—not about any physical process.
This connects to a bigger question: is causation something truly “out there” in the world, or does it depend partly on what we notice, expect, or care about?
Preemption: When Causes Have Backups
Here’s where causation gets really slippery. Suppose two people, Suzy and Billy, both want to break a window. Billy throws his rock first. It hits the window, and it shatters. Suzy was about to throw hers, but now she doesn’t need to. Billy’s throw caused the shattering. Suzy’s throw was a backup—a potential cause that was “preempted.”
Now, here’s the puzzle: the window would have shattered either way. So the shattering didn’t depend on Billy’s throw. If he hadn’t thrown, Suzy would have thrown, and the window still would have shattered. Yet we still want to say Billy’s throw was a cause. This shows that causes don’t have to be necessary for their effects. Something can be a cause even if the effect would have happened without it.
This is a huge problem for any theory that says “c causes e” means “if c hadn’t happened, e wouldn’t have happened.” That’s called a counterfactual theory of causation, and it’s very appealing—except cases like preemption show it’s wrong. Billy’s throw was a cause, but if it hadn’t happened, the window would still have shattered (thanks to Suzy). So the counterfactual test fails.
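The way preemption breaks the counterfactual test can be sketched in a few lines of Python. The boolean model and the names below are illustrative assumptions of mine, not any standard formalism:

```python
# Toy model of the Suzy/Billy case (an illustrative assumption,
# not a standard philosophical formalism).

def window_shatters(billy_throws):
    """Return True if the window ends up shattered."""
    # Suzy is the backup: she throws only if Billy doesn't.
    suzy_throws = not billy_throws
    return billy_throws or suzy_throws

# Actual world: Billy throws, window shatters.
actual = window_shatters(True)           # True

# Counterfactual world: Billy holds back, so Suzy throws instead.
counterfactual = window_shatters(False)  # True

# The naive counterfactual test ("no throw, no shattering") says
# Billy's throw is NOT a cause, since the window shatters either way.
print(actual, counterfactual)  # True True
```

Intuitively Billy’s throw is a cause anyway, which is exactly why simple counterfactual theories need repair.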
Philosophers have spent decades trying to fix this. Some say the key is that preemption cuts the backup early—before the effect happens. Billy’s throw prevented Suzy from throwing, so the backup chain was broken. If we look at what would have happened if Billy hadn’t thrown and everything else stayed the same, we might get a different answer. But figuring out what “everything else stays the same” means is surprisingly hard.
Prevention and Double Prevention
Now consider a different scenario. James Bond shoots down a missile heading toward a villain’s compound. The villain survives. Bond prevented the villain’s death. Did Bond cause the villain to survive? That sounds right. But what if we say that Bond caused something not to happen (the death)? Then we’re back to the problem of absences as effects.
Worse, consider double prevention. Bond shoots down a missile that was going to destroy a computer that was controlling a cyber attack. By preventing the missile from hitting the computer, Bond inadvertently causes the cyber attack to succeed. There’s no physical process connecting Bond to the cyber attack—no energy transfer, no chain of collisions. Yet it seems like Bond caused it.
Cases like these make some philosophers think there are two different kinds of causation: one that involves physical processes (like pushing, pulling, or transferring energy), and another that involves more abstract relationships (like preventing a preventer). This is still a live debate.
Switches and Transitivity
Here’s another head-scratcher. A lamp has two bulbs, left and right. A switch determines which bulb gets power. Your friend flips the switch to the right, and you turn on the power. The right bulb lights up. The room is illuminated.
Your friend flipping the switch caused the right bulb to light. The right bulb lighting caused the room to be illuminated. So, if causation is transitive (if A causes B and B causes C, then A causes C), then your friend flipping the switch caused the room to be illuminated. But that seems wrong. Your friend only determined which bulb lit up, not whether it lit up. You turning on the power is what really caused the illumination.
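The switch case can be put in the same toy style. Again, the function and variable names are my own illustrative assumptions:

```python
def lamp(power_on, switch_right):
    """A two-bulb lamp: the switch routes power to exactly one bulb."""
    right_bulb = power_on and switch_right
    left_bulb = power_on and not switch_right
    illuminated = right_bulb or left_bulb
    return right_bulb, illuminated

# Actual world: switch flipped right, power on.
right_lit, room_lit = lamp(power_on=True, switch_right=True)   # (True, True)

# Wiggle the switch: a different bulb lights, but the room is lit either way.
_, room_lit_left = lamp(power_on=True, switch_right=False)     # still True

# Wiggle the power: now the room goes dark.
_, room_dark = lamp(power_on=False, switch_right=True)         # False
```

The switch makes a difference to which bulb lights but not to whether the room is illuminated, which is why the chain from switch-flipping to illumination feels broken.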
This challenges the idea that causation is transitive. Some philosophers say it is, and we just have to accept that your friend was a cause (even if a minor one). Others say causation isn’t always transitive—sometimes the chain breaks when the intermediate effect doesn’t really “make a difference” to the final outcome.
Normality and What Counts as a Cause
Remember the plant-watering example? Your friend promised to water your plant and didn’t. Your other neighbor also didn’t water it. Most people say the friend’s failure caused the death, not the neighbor’s. Why? Because it was abnormal for the friend to fail, but normal for the neighbor.
This suggests that our judgments about causation depend partly on what we consider normal or expected. In some cases, this seems purely psychological—just a quirk of how humans think. But some philosophers argue that normality is actually built into the causal relation itself. On this view, what counts as a cause depends objectively on what counts as a “default” state of the system.
Consider: a match lights when you strike it. But the match needs oxygen to light. Why do we say the striking caused the lighting, but not the oxygen? Because oxygen is normally present. If we were aliens from a planet without oxygen, we might point to the oxygen as the cause. The traditional view says this is just about what we find interesting or surprising—not an objective fact about causation.
But here’s a challenge to that view. Compare two neuron systems (neuroscientists use simple diagrams of neurons that either fire or don’t). In one system, a neuron fires and causes another to fire, while a backup neuron is preempted. In another system, the same mathematical structure represents a “short circuit” where a neuron fires but accomplishes nothing. These systems are mathematically identical. Yet we want to say the first is a case of causation and the second is not. The only difference seems to be which states are “default” or “normal.” This suggests normality isn’t just a psychological bias—it’s doing real work in determining what causation is.
Type vs. Token Causation
So far we’ve mostly discussed token causation—particular events causing particular effects. But there’s also type causation: “smoking causes cancer” or “boredom causes daydreaming.” These are general claims about kinds of things.
Some philosophers think type causation is just a generalization about token causation: “smoking causes cancer” would mean that token cases of smoking regularly cause token cases of cancer. But this runs into problems. “Drinking a quart of plutonium causes death” seems true, even though nobody has ever done it. So the type-level claim can’t just be a summary of actual token cases.
Others think type causation is more fundamental. On this view, what makes it true that your smoking caused your cancer is, partly, that smoking generally causes cancer. The token case is an instance of the type.
There’s also a middle view: neither type nor token causation is more fundamental. They’re just different things we can analyze in parallel.
What Makes a Causal Model Correct?
Philosophers (and scientists) often represent causal relationships using mathematical models—systems of equations that show how variables influence each other. For example, if X causes Y, we might write Y = X + U, where U stands for everything else that affects Y. These models are useful for prediction and for thinking about what would happen under different circumstances.
But what does it mean for such a model to be correct? One answer: the model is correct if a certain set of counterfactuals are true. If the model says “if you intervened to change X, Y would change accordingly,” then the model is correct just in case that counterfactual is actually true.
This seems promising, but it’s circular in an interesting way. To say what counts as an “intervention” on X, we need to already understand causation. An intervention isn’t just any change in X—it’s a change that comes from outside the system, that doesn’t affect X through the normal causal pathways. So we can’t use interventions to define causation without already having some grip on what causation is.
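To make the circularity concrete, here is a minimal sketch of an intervention on a two-variable structural-equation model. The solve function and the particular equations are illustrative assumptions, not a real library:

```python
def solve(u_x, u_y, do_x=None):
    """Solve the tiny system X = U_X, Y = X + U_Y.

    An intervention do(X = x) replaces X's own equation wholesale,
    while leaving Y's equation and the background terms untouched.
    """
    x = u_x if do_x is None else do_x
    y = x + u_y
    return x, y

# Observed world: X happens to be 2, so Y is 3.
print(solve(u_x=2, u_y=1))           # (2, 3)

# Intervene "from outside": force X to 5, ignoring how X normally arises.
print(solve(u_x=2, u_y=1, do_x=5))   # (5, 6)
```

The model counts as correct if that second line matches the real counterfactual: had X been forced to 5, Y would have been 6. But notice that “forced from outside” already smuggles in causal talk, which is the circle the text describes.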
The philosopher James Woodward has developed a detailed account of intervention that tries to break out of this circle. His definitions use the notion of influence (which is causal) but don’t use influence between the specific variables we’re analyzing. So the account is non-reductive—it doesn’t tell us what causation is in non-causal terms—but it’s not viciously circular. It shows how different causal concepts hang together.
Why This Matters
You might be thinking: “This is interesting, but does it matter? We know that hitting a window breaks it, even if we can’t define causation perfectly.”
That’s true. But the puzzles we’ve been exploring show up in real life. In court cases, judges and juries have to decide what caused an injury or a loss. In medicine, doctors need to know whether a drug caused a side effect or just happened before it. In everyday arguments, we blame people for things they “caused” even when other factors were also at play.
The philosophy of causation doesn’t give us easy answers. But it shows us that our everyday talk about causes is more complicated than it seems. When you say “that caused this,” you’re making a claim that philosophers have been arguing about for centuries. And that’s exactly why it’s worth thinking about.
Appendices
Key Terms
| Term | What it means |
|---|---|
| Token causation | A particular event or fact causing another particular event or fact |
| Type causation | A general claim that one kind of thing causes another kind of thing |
| Preemption | When one potential cause “beats” another to produce an effect, even though the other would have caused it |
| Prevention | When a cause stops an effect from happening |
| Double prevention | When a cause prevents something that would have prevented an effect—so it indirectly brings the effect about |
| Counterfactual | A “what if” statement: “if X hadn’t happened, Y wouldn’t have happened” |
| Intervention | A way of changing one variable without affecting the causal paths that lead to it |
| Transitivity | The idea that if A causes B and B causes C, then A causes C |
| Relata | The things that stand in a relation—in this case, causes and effects |
| Normality | What’s considered default or expected in a situation; some think it’s part of what causation is |
| Absence | Something that doesn’t happen being treated as a cause or effect |
Key People
- Donald Davidson — American philosopher who argued that events are individuated by their causes and effects, and that causation is a relation between events
- David Lewis — American philosopher who developed a counterfactual theory of causation and a fine-grained theory of events as properties of spacetime regions
- James Woodward — American philosopher who developed a “manipulationist” or “interventionist” account of causation, non-reductive but informative
- Jonathan Schaffer — Contemporary philosopher who defended contrastivism (causation as a four-place relation) and introduced “trumping preemption” cases
Things to Think About
- You’re on a jury. Someone is accused of causing a death. They pushed the victim, who fell and hit their head. But the victim also had a rare medical condition that made them likely to die from a minor bump. Did the push cause the death? How would you decide?
- If causation depends partly on what’s “normal,” then what counts as normal? Normal for whom? Does this mean causation is relative to a culture or a perspective?
- Suppose you could time travel and prevent your own birth. Would that be causation? Or does causation require that causes come before effects? Could something cause something that happened before it?
- Two people both shoot at a victim. The first bullet kills. The second bullet also would have killed. Is the first shooter a cause of death? Is the second? What if we know the second bullet was fired before the victim died—would that change your answer?
Where This Shows Up
- Law and criminal trials — Courts constantly argue about causation: Did a company’s negligence cause an injury? Did a doctor’s mistake cause a death? The “but for” test (would the harm have happened anyway?) is a version of the counterfactual theory.
- Medicine and drug safety — When a patient gets a side effect, doctors need to know whether the drug caused it or whether it would have happened anyway. This is a real version of the preemption problem.
- Everyday blame and responsibility — When you say “you made me late” or “that’s why I’m upset,” you’re making a causal claim. The puzzles about normality and selection affect how we assign blame.
- Artificial intelligence and machine learning — Causal models are used in AI to make predictions and to reason about what would happen under different conditions. The philosophical debates about what causation is affect how these models are built and interpreted.