What Does It Mean for One Thing to Cause Another? (And What If You Could Just Change It?)
Imagine you’re playing a video game. You press a button on the controller, and on the screen, your character jumps. That seems straightforward: pressing the button causes the jump. But think about it for a second. How do you know that the button press really causes the jump? What if it’s just a coincidence? What if some hidden program is really making both happen, and the button is just a decoy?
Now imagine something stranger. Suppose you’re a scientist and you want to know whether a new drug really causes people to get better. You can’t just give it to people and watch. Maybe people who take the drug are also more likely to exercise. Maybe they’re more likely to believe they’ll get better, and that belief itself is what helps them. How can you tell what’s actually causing what?
This is a puzzle that has bothered philosophers for a long time. And one answer that many scientists and philosophers have found useful is surprisingly simple: Causation is about what would happen if you could reach in and change things.
The Basic Idea: Manipulation and Causation
Here’s the core thought: If X truly causes Y, then if you could somehow reach in and change X—without messing with anything else—Y would change too. If pressing the button causes the character to jump, then if you pressed the button, the character would jump. And if you didn’t press it, the character wouldn’t jump. (Assuming nothing else weird is going on.)
This sounds obvious. But philosophers noticed that this idea, if you take it seriously, might actually be the definition of what causation is. In other words: “X causes Y” just means “if you manipulated X in the right way, Y would change.”
This is called a manipulability theory of causation. The philosopher Douglas Gasking put it this way in 1955: “A cause is something that can be used as a means for bringing about its effect.” And the methodologists Cook and Campbell (authors of a famous book on how to design experiments) said something very similar: “The paradigmatic assertion in causal relationships is that manipulation of a cause will result in the manipulation of an effect.”
But here’s where things get interesting—and tricky.
The First Problem: What Counts as “Manipulation”?
If you think about it, the word “manipulation” is itself a causal idea. When you manipulate something, you cause it to change. So if you define “cause” in terms of “manipulation,” you’re using a causal idea to explain causation. That seems circular. It’s like saying “up is the opposite of down, and down is the opposite of up”—you haven’t really explained anything.
Two philosophers, Peter Menzies and Huw Price, tried to escape this circularity in a clever way. They said: we all have direct experience of doing things as agents. You know what it’s like to decide to raise your hand and then feel it go up. That experience doesn’t depend on any fancy philosophical definition of causation. It’s just something you feel. So, they argued, we can use this experience of agency as the foundation: an event A causes a distinct event B just in case bringing about A would be an effective means by which a free agent could bring about B.
This sounds promising. But it runs into two big problems.
Problem one: What does “free agent” even mean? Consider the classic barometer case: atmospheric pressure causes both the barometer reading and the storm. If an experimenter freely chooses to set the barometer dial to a certain reading (say, because they saw the atmospheric pressure and want the dial to match it), then the dial reading and the storm will still be correlated, even though the dial doesn’t cause the storm. In other words, a free action can produce correlations that aren’t causal. And if the experimenter’s action is caused by something that also causes the effect (as when an experimenter unknowingly gives a placebo drug while also doing something else that helps patients), you still can’t tell what’s cause and what’s coincidence. So freedom alone doesn’t solve the puzzle.
Problem two: What about things humans can’t possibly manipulate? Consider the claim “The gravitational attraction of the moon causes the tides.” Nobody can reach out and change the moon’s gravitational pull. So on Menzies and Price’s account, how can this be a genuine causal claim? They tried to answer this by saying we can imagine similar situations where we can manipulate things—like a laboratory model of the moon and tides—and then infer that the real moon and tides work the same way. But this just kicks the can down the road: how do we know the laboratory model is really similar in the causal respects? That itself seems to require knowing something about causation already.
The Second Problem: Anthropocentrism
There’s a deeper worry here. If causation is fundamentally tied to human agency—to what we can do—then causation seems to be a human-centered concept. But surely causation existed before humans existed. The dinosaurs went extinct because an asteroid hit the Earth. That was a causal relationship that didn’t involve any humans. How can a theory that defines causation in terms of human action account for that?
This objection seems powerful. But it also points toward a way out.
Enter the “Intervention” (Not Just Human Action)
What if we drop the idea that manipulation has to involve human beings or free actions? What if we just talk about an intervention—a change in X that meets certain purely causal conditions, whether or not any human does it?
Here’s the key idea. Think of an intervention on X as a kind of “surgical” change: it completely breaks the connection between X and its normal causes, it sets X to a new value, and it doesn’t affect Y through any route that doesn’t go through X. If such an intervention on X brings about a change in Y, then X causes Y.
Notice: this definition of “intervention” uses causal language. It talks about “breaking the connection” and “not affecting Y through other routes.” So we haven’t escaped circularity. But maybe that’s OK. The point isn’t to give a non-circular definition of causation. The point is to make clear what we’re committed to when we make a causal claim, and to connect causal claims to other things we care about—like what would happen if we performed an experiment.
This version of the theory is much more popular among scientists and statisticians. The computer scientist Judea Pearl has developed it in great detail, using systems of equations and something called “causal graphs” (diagrams where arrows represent causal relationships). In Pearl’s framework, an intervention on X means replacing the equation that normally governs X’s value with a new equation that just sets X equal to some fixed number. This “breaks” the arrows going into X from its causes, but leaves all other arrows intact. Then you can calculate what Y would be.
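As a minimal illustrative sketch (not Pearl’s actual notation or software; the variable names and equations are hypothetical), here is what “replacing X’s equation” looks like for a barometer-style case, where a common cause Z drives both the barometer reading X and the storm Y:

```python
# Toy structural causal model: pressure Z drives both a barometer
# reading X and a storm Y. An intervention do(X = x) replaces X's
# equation with the fixed value x, "breaking" the arrow from Z to X.

def simulate(z, do_x=None):
    """Evaluate the model, optionally intervening on X."""
    x = z if do_x is None else do_x  # intervention overrides X's normal equation
    y = z                            # the storm depends on pressure, not on X
    return x, y

# Without intervention, X and Y move together (both track Z):
print(simulate(z=1))          # X and Y are correlated
# Intervening on X leaves Y unchanged: the barometer doesn't cause the storm.
print(simulate(z=1, do_x=0))
```

The point of the sketch: the observed correlation between X and Y vanishes under intervention, which is exactly how the framework distinguishes correlation from causation.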
What This Theory Can Do (Even Though It’s Circular)
Even though interventionist theories are “circular” in the sense that they use causal ideas to explain causation, they’re far from trivial. Here’s why:
First, they tell us what we need to know to identify causes. If you want to know whether X causes Y, you need to know what would happen to Y under an ideal experiment where you intervene on X in just the right way. This is enormously useful for science. It tells scientists what they should try to do: perform experiments or find “natural experiments” (situations in nature that happen to have the right properties) where X is changed in a way that’s independent of other factors.
Second, they help us distinguish different kinds of causal claims. For instance, sometimes X is a “total cause” of Y (changing X changes Y overall) but not a “direct cause” (the change happens through intermediate steps). And sometimes X is a “direct cause” of Y but not a “total cause” because the effects cancel out. Consider a system where X causes Y through two pathways: one positive and one negative, and they exactly cancel. An intervention on X won’t change Y at all. Is X a cause of Y? In one sense, no (it doesn’t make a difference overall). In another sense, yes (it’s directly connected). Interventionist theories give us the tools to make these distinctions precise.
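The canceling-pathways case can be made concrete with a toy model (the equations are assumed for illustration, not drawn from any particular source): X raises Y through an intermediate variable M and lowers it directly by exactly the same amount.

```python
# Two exactly canceling pathways:
#   X -> M -> Y  (positive, via M = X)
#   X ------> Y  (negative, direct)
# with Y = M - X, so the two effects cancel.

def model(x, do_m=None):
    m = x if do_m is None else do_m  # optionally hold M fixed by intervention
    y = m - x                        # positive path through M, negative direct path
    return y

# Total cause? No: intervening on X alone never changes Y (always 0).
print(model(0), model(5))
# Direct cause? Yes: intervene to fix M, and now changing X changes Y.
print(model(0, do_m=2), model(5, do_m=2))
```

Holding M fixed by intervention is what the phrase “hold all intermediate variables fixed” means in the definition of a direct cause.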
Third, they can handle tricky cases like omissions. Did the gardener’s failure to water the plant cause it to die? Many people say yes. But some theories of causation (like those that require a physical “push” or transfer of energy) say no. Interventionist theories say yes: if you intervened to change whether the gardener waters, the plant’s survival would change. So omissions count as causes. This matches our ordinary way of thinking, though philosophers still argue about whether this is right.
Fourth, they can handle “preemption” cases. Suppose two gunmen are aiming at a victim. Gunman 1 shoots and kills the victim. Gunman 2 was also going to shoot, but didn’t because Gunman 1 already did. Did Gunman 1’s shot cause the death? Obviously yes. But here’s the puzzle: if Gunman 1 hadn’t shot, Gunman 2 would have, so the victim would have died anyway. The death does not “counterfactually depend” on Gunman 1’s shot. Many theories of causation have trouble with this. Interventionist theories handle it by considering combinations of interventions: if you intervene to fix Gunman 2 at “doesn’t shoot,” then changing whether Gunman 1 shoots changes whether the victim dies. This identifies Gunman 1 as the cause.
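The two-gunmen recipe can be sketched the same way (a hypothetical toy model, with the backup gunman written as an equation):

```python
# Preemption: Gunman 2 fires only if Gunman 1 doesn't.

def death(g1_shoots, do_g2=None):
    g2_shoots = (not g1_shoots) if do_g2 is None else do_g2  # backup shooter
    return g1_shoots or g2_shoots

# Simple counterfactual dependence fails: the victim dies either way.
print(death(True), death(False))
# Now intervene to fix Gunman 2 at "doesn't shoot"; death depends on Gunman 1.
print(death(True, do_g2=False), death(False, do_g2=False))
```

The combination of interventions (fix the backup, then wiggle the actual shooter) is what recovers the verdict that Gunman 1’s shot caused the death.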
But Can Interventions Be Impossible?
This is where things get really interesting—and controversial.
Pearl’s version of the theory says that when we ask “What would happen to Y if we intervened to set X to value x?” we don’t need to worry about whether such an intervention is physically possible. We just set X in our imagination and calculate. On this view, it’s fine to say “If we set the moon’s gravitational pull to half its current value, the tides would be lower.” Even though we can’t actually do that.
But some philosophers think this is too permissive. They think interventions need to be possible in some stronger sense. The question is: what sense?
Consider the claim “Being a raven causes being black.” Can we intervene to change whether something is a raven? What would that even mean? If we try to imagine a raven turning into a non-raven (or vice versa), we seem to run into conceptual confusion. We don’t have a clear idea of what such a change would involve. So maybe we shouldn’t treat “ravenness” as a genuine cause of blackness. Instead, we should look for more specific, manipulable factors—like the genetic and biochemical mechanisms that produce black feathers in ravens.
This way of thinking is common among statisticians like Paul Holland and Donald Rubin. They argue that causal claims about things like race, gender, or species are unclear or even meaningless from a scientific point of view, because we have no coherent idea of what it would mean to intervene on these variables. (Though of course racism and sexism are real social problems—the point is just that the causal claims involved might need to be stated more carefully.)
The jury is still out on this. Some philosophers think the “setting” version of interventionism (where you just imagine setting the variable) is perfectly fine and captures real causal claims. Others think the “possibility-constrained” version (where there must be a coherent notion of actually changing the variable) is more defensible. The debate connects to deep questions about what kinds of things can be causes and what it means for a counterfactual to be true.
Do Causal Concepts Apply to the Whole Universe—or to Physics?
Here’s another mind-bending question. Suppose we have a physical theory that describes the entire universe. The state of the universe at time t causes the state at time t + 1. Can we make sense of this as a causal claim in interventionist terms? What would it mean to intervene on the state of the whole universe? There’s nothing outside the universe that could do the intervening. Pearl himself says: “If you wish to include the whole universe in the model, causality disappears because interventions disappear—the manipulator and the manipulated lose their distinction.”
Some philosophers think this shows that causal concepts don’t really apply to the universe as a whole, even though they work perfectly well for things within the universe. Others think we can still make sense of the idea by “setting” the state of the universe to a different value in our imagination and seeing what the laws say would happen.
Similar issues arise in fundamental physics. In Einstein’s general theory of relativity, there’s a relationship between the distribution of matter (the stress-energy tensor) and the shape of spacetime (the metric). Is this a causal relationship? Some say yes, some say no. Interventionist theories give us a way to think about the question: can we coherently imagine an intervention on the matter distribution that would change the spacetime shape? Or would any such intervention inevitably violate the conditions for a genuine intervention? Different versions of interventionism give different answers.
So What’s the Verdict?
The manipulationist approach to causation is not a finished theory. Philosophers still argue about many of the issues raised above. But it has proven enormously useful in science, statistics, and everyday thinking. It gives us a clear way to think about what causal claims mean: they’re claims about what would happen under ideal experiments. And it gives us tools for actually figuring out what causes what, even when we can’t do experiments.
The circularity that bothered earlier philosophers turns out not to be fatal. We don’t need to define causation in non-causal terms to have a useful and informative theory. We just need to show how different causal concepts are connected to each other and to things we can observe. The manipulationist framework does this remarkably well.
As for the big questions—what kinds of things can be causes, whether causation applies to fundamental physics, whether interventions must be possible—these remain open. The philosophers who argue about them are not confused. They’re trying to figure out the shape of one of the most basic ideas we have: the idea that one thing can make another thing happen.
Key Terms
| Term | What It Does in This Debate |
|---|---|
| Causation | The relationship between two events or variables where one brings about the other |
| Manipulability theory | The view that causation should be understood in terms of what would happen if you could change the cause |
| Intervention | A surgical change to a variable that breaks its connection to its normal causes and only affects the effect through that variable |
| Setting intervention | The idea that you can just imagine setting a variable to a new value, without worrying about whether it’s physically possible |
| Possibility-constrained intervention | The idea that for a genuine causal claim, the intervention must be possible in some non-trivial sense |
| Counterfactual | A “what if” statement about what would happen if something were different |
| Total cause | A cause that makes an overall difference to the effect when you change it |
| Direct cause | A cause that affects the effect even when you hold all intermediate variables fixed |
| Preemption | A situation where one cause “beats” a backup cause to the effect, so the effect doesn’t counterfactually depend on the cause that actually produced it |
Key People
- Peter Menzies and Huw Price – Two philosophers who developed an “agency theory” of causation, arguing that our direct experience of doing things gives us the concept of causation without circularity
- Judea Pearl – A computer scientist who developed a detailed mathematical framework for interventionist theories of causation, using equations and causal graphs
- James Woodward – A philosopher who developed a version of interventionism that explicitly characterizes interventions in causal terms, giving up on the goal of reduction
- Nancy Cartwright – A philosopher who criticized interventionist theories as being too “operationalist” and “monolithic,” arguing that causation involves many different criteria
- Paul Holland and Donald Rubin – Statisticians who argued that causal claims about things like race or gender are unclear because we can’t coherently imagine intervening on them
Things to Think About
- Suppose someone says “Being born in a rich country causes you to have more opportunities in life.” Is this a meaningful causal claim? What would it mean to “intervene” on where someone is born? Does the answer affect whether you think the claim is true or just unclear?
- Consider a case of “double prevention”: A guard is supposed to prevent a prisoner from escaping. Another guard knocks the first guard unconscious, so the first guard can’t prevent the escape, and the prisoner escapes. Did the second guard’s action cause the escape? There’s no physical connection between the knockout and the escape. Interventionist theories say yes. Causal process theories say no. Which seems right to you?
- If someone says “My lucky socks cause my team to win,” is this a causal claim that could be true? What would an interventionist say about it? Does whether you can actually test the claim matter for whether it’s meaningful?
- The philosopher David Lewis analyzed causation in terms of counterfactual dependence: roughly, one event causes another when the second would not have happened without the first. Is this the same as the interventionist view? If not, what’s the difference?
Where This Shows Up
- Medical trials: When doctors test whether a drug works, they’re essentially trying to perform an intervention: give some people the drug and others a placebo, making sure nothing else is different between the groups
- Economics: Economists use something called “instrumental variables” to figure out causal relationships from data where experiments aren’t possible. The idea is to find something that acts like a natural intervention
- Machine learning: Computer scientists have developed algorithms that learn causal relationships from data by looking for patterns that would be stable under interventions
- Everyday arguments: When you argue with a friend about whether something caused something else, you’re usually making claims about what would happen if things were different. “If you hadn’t said that, she wouldn’t have gotten mad.” That’s a manipulationist claim
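The instrumental-variables idea mentioned above can be sketched with simulated data (a toy model with assumed coefficients, not a real study): the instrument Z affects X but touches Y only through X, so it acts like a natural intervention even though the confounder U is unobserved.

```python
# Toy instrumental-variables demo. True causal effect of X on Y is
# beta = 2.0 (assumed for illustration). U confounds X and Y, so a
# naive regression of Y on X is biased; the IV estimate is not.
import random
random.seed(0)

beta = 2.0
n = 100_000
rows = []
for _ in range(n):
    u = random.gauss(0, 1)   # unobserved confounder
    z = random.gauss(0, 1)   # instrument: affects Y only through X
    x = z + u
    y = beta * x + u         # U biases the naive X -> Y estimate
    rows.append((z, x, y))

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

zs, xs, ys = zip(*rows)
naive = cov(xs, ys) / cov(xs, xs)  # confounded: overestimates beta
iv = cov(zs, ys) / cov(zs, xs)     # IV estimate: close to the true beta
print(round(naive, 2), round(iv, 2))
```

Here the naive estimate lands near 2.5 rather than 2.0 because U pushes X and Y in the same direction, while the instrument recovers the true effect.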