Philosophy for Kids

What Does It Mean for One Thing to Cause Another?

Here’s a puzzle. Every morning, your alarm goes off, and then you wake up. The alarm and your waking happen together, every day. But is the alarm causing you to wake up? Probably yes—that’s what alarms are for. But now consider this: every morning, the sun rises, and then you wake up. The sun rising and your waking happen together, every day too. But is the sun rising causing you to wake up? Probably not—you’d wake up even if it were cloudy.

Here’s the thing: both pairs of events—alarm-and-waking, sunrise-and-waking—look exactly the same in terms of pattern. One event follows the other, regularly. So what makes one a genuine cause and the other just a coincidence? This is the central question that philosophers have been fighting about for centuries. And the answers they’ve come up with are weirder and more interesting than you might expect.

The Basic Idea: Causes Are Just Patterns

The simplest answer, defended most famously by the Scottish philosopher David Hume in the 1700s, is this: a cause is just an event that’s regularly followed by another event. Nothing more. When you say “the alarm caused me to wake up,” all you’re really saying is:

  1. The alarm happened right before the waking.
  2. The alarm was right next to you (spatially close).
  3. Every time something like the alarm happens, something like the waking follows.

No hidden “oomph” or mysterious force connects them. No secret necessary connection. Just pattern. This is called a regularity theory of causation.
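The three-part pattern test can be made concrete with a tiny sketch in Python (our own illustration, of course, not anything Hume wrote): we record which events happened each morning and check whether one type of event is always followed by another.

```python
def regularly_followed(log, candidate, effect):
    """Hume-style regularity test: on every morning where the
    candidate event occurs, the effect occurs too. (We quietly
    assume the right temporal order and spatial closeness.)"""
    mornings_with_candidate = [m for m in log if candidate in m]
    return all(effect in m for m in mornings_with_candidate)

# Each set records the events of one morning (invented data).
log = [
    {"alarm", "sunrise", "waking"},
    {"alarm", "sunrise", "waking"},
    {"sunrise", "waking"},  # forgot to set the alarm, woke up late anyway
]

print(regularly_followed(log, "alarm", "waking"))    # True
print(regularly_followed(log, "sunrise", "waking"))  # True
# The bare pattern test "certifies" the sunrise too, which is exactly
# the puzzle from the opening: patterns alone can't tell the difference.
```

Notice that the test gives the same verdict for both pairs, which is the regularity theory's problem in miniature.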

Hume was a radical. He argued that when we look at the world closely, we never actually see causation. We see one billiard ball roll up to another, and then the second one moves. But do we see the causing? No. We see the first ball moving, then we see the second ball moving, and we feel a mental habit—an expectation—that this will happen again. The “necessary connection” we feel between cause and effect, Hume said, is something our minds add to the world, not something we find in it.

This was shocking at the time. It still is, if you sit with it. Hume was saying that the universe, at bottom, is just a giant collection of individual facts—billiard balls at position A at time 1, at position B at time 2—and causation is just a pattern we notice in those facts. There’s no deeper glue holding things together.

Why This Gets Complicated Quickly

The regularity theory sounds clean, but it runs into problems fast. Let’s look at three big ones.

Problem 1: Singular causes. Imagine a giant meteor hitting Earth 66 million years ago, causing the dinosaurs to go extinct. That’s a cause, right? But according to the regularity theory, for the meteor to cause the extinction, there needs to be a pattern: all giant meteors hitting Earth must be followed by dinosaur extinction. But that’s false. The next giant meteor won’t do it—dinosaurs are already gone. So the regularity theory seems to say that the meteor didn’t cause the extinction, which is absurd.

Problem 2: Spurious regularities. The rooster’s crow is regularly followed by sunrise. But the rooster doesn’t cause the sun to rise. We all know this. But the regularity theory, in its simplest form, has no way to tell the difference between real causal connections and accidental ones.

Problem 3: Common causes. Here’s a trickier version of problem 2. Suppose it turns 5 PM (let’s call this event C). The clock makes a factory whistle in Manchester blow (event A), and it also makes workers in London stop work (event B), since the London factories run on the same schedule. Now, the Manchester whistle and the London workers stopping are regularly connected. But the Manchester whistle is NOT a cause of the London workers stopping: they’re both effects of the same common cause, its being 5 PM. The regularity theory has a hard time explaining why this isn’t a real causal relation.

Philosophers have spent enormous energy trying to fix these problems while keeping the basic idea that causation is just about patterns.

Refining the Pattern: INUS Conditions

The philosopher J.L. Mackie came up with an important refinement in the 1960s and 70s. He noticed that causes are rarely sufficient on their own to produce an effect. Usually, a bunch of things have to come together.

Think about a house burning down. What caused it? A short-circuit? Yes, but only if oxygen is present, and if no sprinkler system was working, and if the house was made of flammable materials. The short-circuit is just one factor in a whole cluster of factors that together are sufficient for the fire. But there are other clusters too: an arsonist with gasoline, oxygen present, no sprinklers, etc. Any one of these clusters would do the job.

Mackie called each factor in these clusters an INUS condition: an Insufficient but Non-redundant part of an Unnecessary but Sufficient condition. That’s a mouthful, but the idea is simple. The short-circuit alone isn’t enough to burn the house (that’s the “insufficient” part). But it’s also not irrelevant: the short-circuit is needed for that particular cluster to work (that’s the “non-redundant” part). And the whole cluster of factors (short-circuit + oxygen + no sprinklers) is sufficient for the fire, but not necessary (that’s the “unnecessary” part), because other clusters (arsonist + gasoline + oxygen) would also do the job.
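Because the INUS definition is really a piece of logic, it can be turned into a checkable sketch. Here is a toy model in Python (the factor names and the fire rule are our inventions; Mackie worked purely in logic, not code):

```python
from itertools import chain, combinations

FACTORS = ["short_circuit", "oxygen", "no_sprinklers", "arsonist", "gasoline"]

def situations():
    """Every possible combination of factors being present or absent."""
    return (set(c) for c in chain.from_iterable(
        combinations(FACTORS, n) for n in range(len(FACTORS) + 1)))

def fire(s):
    # Toy rule: exactly two clusters each suffice for the fire.
    return ({"short_circuit", "oxygen", "no_sprinklers"} <= s or
            {"arsonist", "gasoline", "oxygen", "no_sprinklers"} <= s)

def sufficient(cluster):
    # Sufficient: whenever the whole cluster is present, the fire happens.
    return all(fire(s) for s in situations() if cluster <= s)

def is_inus(factor, cluster):
    return (factor in cluster
            and not sufficient({factor})            # Insufficient on its own
            and sufficient(cluster)                 # part of a Sufficient cluster
            and not sufficient(cluster - {factor})  # Non-redundant within it
            and any(fire(s) and not (cluster <= s)  # the cluster is Unnecessary
                    for s in situations()))

print(is_inus("short_circuit", {"short_circuit", "oxygen", "no_sprinklers"}))  # True
```

The four conditions in `is_inus` line up one-to-one with the four letters of INUS, which is a handy way to remember the definition.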

This helps with problem 1 (the meteor). The meteor is part of a cluster of factors that, together, were sufficient for the extinction. We don’t need every meteor to cause extinction—we just need this particular cluster to be sufficient. But Mackie’s theory still struggled with problem 3 (common causes). In the whistle example, the Manchester whistle ends up looking like an INUS condition for the London workers stopping, which it shouldn’t be.

The Inferential Turn

Some philosophers got frustrated with trying to analyze causation purely in terms of patterns of events. They argued that we should think about causation in terms of inference—what we can rationally conclude from what.

Here’s the basic idea of an inferential theory: C causes E if, given our background knowledge and the fact that C happened, we can infer that E happened (or will happen). This might sound similar to the regularity theory, but it adds something crucial: the inference has to go through laws of nature or causal mechanisms, not just any old pattern.

The logical empiricists, a group of philosophers in the early 20th century, tried to develop this idea. They proposed the deductive-nomological (DN) model: C causes E if E can be logically deduced from C together with some laws of nature. So the alarm causes waking because: (alarm + law about alarms and human hearing + law about sleep cycles + etc.) logically implies waking. The sunrise doesn’t cause waking because there’s no law connecting sunrises to your waking—at least not in the relevant way.

But this ran into a problem called the symmetry problem. Consider a tall tower and its shadow. Given the height of the tower and the position of the sun, you can deduce the length of the shadow. That seems causal. But you can also go backwards: given the length of the shadow and the position of the sun, you can deduce the height of the tower. That doesn’t seem causal—towers’ heights aren’t caused by their shadows. The DN model has trouble distinguishing these.

Ranking Functions and Belief

One of the most sophisticated inferential approaches comes from Wolfgang Spohn, a German philosopher. Spohn uses something called ranking functions to represent how strongly we believe or disbelieve things.

Here’s the intuition. You have a set of beliefs about how the world works. When you learn that C happened, you update your beliefs. If that update makes you believe E more strongly than you did before (compared to if C hadn’t happened), then C is a reason for E. And if C is temporally prior to E, then C is a cause of E.

This handles the common cause problem nicely. In the whistle example, learning that the Manchester whistle blew doesn’t make you believe the London workers stopped any more than you already did—because you already know it’s 5 PM, and that’s what causes both. The Manchester whistle gives you no new information about the London workers. So it’s not a cause.
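Spohn’s ranking functions measure degrees of belief rather than frequencies, but the “no new information” point can be illustrated with simple frequencies. A toy simulation (all numbers invented) shows the whistle looking very informative on its own, yet adding nothing once you know the time:

```python
import random
random.seed(1)

# Each record is one hour of one day: (is it 5 PM?, whistle?, London stops?)
records = []
for day in range(2000):
    for hour in range(24):
        five_pm = (hour == 17)
        whistle = five_pm and random.random() < 0.9  # whistle usually blows at 5
        london = five_pm and random.random() < 0.8   # London usually stops at 5
        records.append((five_pm, whistle, london))

def prob_london(condition):
    """Fraction of rows matching the condition where London stops."""
    rows = [r for r in records if condition(r)]
    return sum(r[2] for r in rows) / len(rows)

base       = prob_london(lambda r: True)
if_whistle = prob_london(lambda r: r[1])
if_5pm     = prob_london(lambda r: r[0])
if_both    = prob_london(lambda r: r[0] and r[1])

print(round(base, 3), round(if_whistle, 3))  # the whistle raises the probability a lot...
print(round(if_5pm, 3), round(if_both, 3))   # ...but adds nothing once 5 PM is known
```

Statisticians call this “screening off”: the common cause (5 PM) screens the whistle off from the London workers, and that is the cue that the whistle-to-London connection is spurious.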

But this raises a strange question: is causation in the world, or is it in our heads? Spohn’s theory is explicitly epistemic—it’s about what we should infer based on what we know. If nobody had minds, would causation still exist? Some philosophers (including Hume, arguably) say yes—the patterns are real, even if nobody notices them. Others say no—causation is fundamentally about how we organize and predict our experience.

Where Things Stand Now

Modern regularity and inferential theories have become quite technical and powerful. They can handle many of the tricky cases that tripped up earlier versions. But they still face challenges.

The biggest one is probably the direction of causation problem. Why do causes always come before effects? The regularity theory just stipulates this (no backwards causation allowed), which feels a bit like cheating. And even temporal precedence isn’t always straightforward—what about a pendulum’s period being determined by its length? The length and the period don’t seem to have a clear temporal order, yet we’d say the length “causes” the period.

Another challenge: can regularity and inferential theories handle probabilistic causation? If smoking raises your chance of getting lung cancer from 1% to 10%, but most smokers never get cancer, is smoking a cause? The simple regularity theory says no—there’s no invariable pattern. But that seems wrong.
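The smoking numbers from the paragraph above are easy to simulate (the 1% and 10% figures are the text’s illustrative numbers, not real medical statistics):

```python
import random
random.seed(0)

# Toy model: smoking lifts the chance of cancer from 1% to 10%.
# Each entry is one simulated person (True = gets cancer).
smokers    = [random.random() < 0.10 for _ in range(100_000)]
nonsmokers = [random.random() < 0.01 for _ in range(100_000)]

rate_smokers    = sum(smokers) / len(smokers)
rate_nonsmokers = sum(nonsmokers) / len(nonsmokers)

print(round(rate_smokers, 2), round(rate_nonsmokers, 2))  # roughly 0.1 and 0.01
# About 90% of the simulated smokers never get cancer, so there is no
# invariable pattern, yet smoking plainly raises the chance.
```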

Finally, there’s the question of whether these theories are truly reductive—whether they analyze causation in terms of something more basic. Many contemporary versions end up using causal notions in their analysis (like “causal law” or “causal model”), which means they’re not really explaining what causation is, just organizing it. This might be fine, but it’s less ambitious than what Hume started.

What’s at Stake

You might wonder why philosophers care so much about this. Here’s one reason: causation is everywhere. Every time you say “because,” every time you explain why something happened, every time you decide what to do based on what will happen next—you’re relying on some idea of causation. If we don’t understand what causation is, we don’t fully understand our own thinking.

There’s also a practical side. In courts, lawyers argue about what caused someone’s injury. In medicine, doctors argue about what caused a disease. In science, researchers argue about what causes climate change or cancer. If these arguments rest on a concept we don’t understand, that’s a problem.

The regularity and inferential theories offer one vision: causation is ultimately just a pattern, or a habit of inference. There’s no deep metaphysical mystery. You can be a perfectly good scientist or judge without believing in causal powers or necessary connections. All you need is careful observation of patterns and rational inference.

Other philosophers find this deeply unsatisfying. They want causation to be a real thing in the world—a force or power that connects events, not just a pattern we notice. But that’s a different story, and a different debate.

For now, the regularity and inferential approaches remain active research programs. They’ve survived decades of criticism, and they keep getting more sophisticated. Whether they’ll ultimately succeed is something nobody really knows.


Key Terms

  • Regularity theory: says causation is just a pattern of one type of event regularly following another type.
  • INUS condition: a factor that isn’t sufficient for an effect on its own, but is a needed part of a cluster of factors that together are sufficient (though other clusters would also work).
  • Inferential theory: says causation is about being able to infer the effect from the cause using background knowledge and laws.
  • Deductive-nomological model: says C causes E if E can be logically deduced from C plus laws of nature.
  • Spurious causation: a regular connection that looks causal but isn’t (like the rooster’s crow and the sunrise).
  • Common cause: a third event that causes two others, making them look causally related to each other when they’re not.

Key People

  • David Hume (1711–1776): Scottish philosopher who argued that we never observe causation directly—only patterns of events and our own mental habits. His work started the whole debate.
  • J.L. Mackie (1917–1981): Australian philosopher who developed the INUS condition analysis of causation, trying to fix problems with simple regularity theories.
  • Wolfgang Spohn (born 1950): German philosopher who uses ranking functions (a way of measuring belief) to analyze causation in terms of rational inference.
  • Michael Strevens (born 1966): Philosopher of science who developed the “kairetic” account, connecting causation to explanation and causal models.

Things to Think About

  1. Think of something you know is a cause (like flipping a light switch causing the light to turn on). Can you trace what you actually see or experience that makes you call it a cause? Is it just the pattern? Or does it feel like something more?

  2. If causation is just a pattern, then how do we know the pattern will continue? The sun has risen every day so far—does that mean it will rise tomorrow? Hume famously said there’s no logical reason to think so. Does that bother you?

  3. Imagine a universe where everything that happens happens exactly once, and nothing repeats. Could there be causation in such a universe? If not, does that mean causation requires repetition?

  4. Some philosophers think causation is objective (real in the world) and others think it’s subjective (something we impose on the world). Can you think of a test that would decide between these views?

Where This Shows Up

  • Courtrooms: Judges and lawyers regularly debate what “caused” an accident or injury. The legal concept of “but-for” causation (the event wouldn’t have happened “but for” the cause) is closely related to these philosophical theories.
  • Medicine: When a doctor says “smoking caused your lung cancer,” they’re usually making a statistical claim about patterns, not claiming to see the causation directly.
  • Machine learning: Computers that learn causal relationships from data are essentially implementing sophisticated versions of regularity theories—finding patterns in data and using them to make predictions.
  • Everyday arguments: When you argue with a friend about what caused something (why the team lost, why your phone broke), you’re both implicitly using some theory of causation.