Philosophy for Kids

What Makes a Good Explanation? How Scientists (and Philosophers) Think About Causes

You’re sitting in science class, and your teacher asks: “Why did the beaker of water freeze?” You might say, “Because the temperature dropped below zero degrees Celsius.” That seems like a decent answer. But now imagine someone asks: “Why did the window break?” You say, “Because a rock hit it.” Again, that seems fine. But here’s a strange thing philosophers noticed: these simple answers leave out almost everything that’s actually happening. How does cold temperature make water turn into ice? How does a rock hitting glass make it shatter? If we wanted a really complete answer, we’d have to describe every tiny step in the process—atoms moving, bonds forming or breaking, energy transferring.

Philosophers call these step-by-step descriptions mechanisms. And for the last few decades, a lot of them have argued that the best scientific explanations are the ones that describe mechanisms in detail. But as you might guess, this gets complicated fast. How much detail is enough? When is a mechanism description actually better than a simple “because”? And are there explanations that don’t appeal to mechanisms at all?


The Mechanistic Picture: How Things Actually Work

Imagine you’re trying to explain how a bicycle works. You could say, “You pedal and the wheels turn.” That’s true, but it’s not very helpful if someone wants to understand the bike. A mechanistic explanation would describe all the parts—pedals, chain, gears, wheels—and show how each part’s movement causes the next part to move, eventually making the bike go.

Philosophers who study mechanisms think scientific explanations should work the same way. When biologists explain how a nerve cell sends an electrical signal, they don’t just say “the cell fires.” They describe the cell membrane, the ion channels that open and close, how sodium and potassium move in and out, and the timing of each step. That’s a mechanism.

This might sound obvious—of course you’d want to know the steps! But there’s a real debate here. Some philosophers, like Carl Craver, argue that the more detail you include, the better your explanation is. If your model of the nerve cell doesn’t describe the molecular details of how the channels open, then your explanation is incomplete—it’s what Craver calls a “mechanism sketch.” It’s like having a diagram of the bicycle but leaving out how the chain connects to the gears. The sketch isn’t wrong, exactly, but it’s not fully satisfying.

Other philosophers push back. They point out that sometimes adding more detail actually makes an explanation worse, not better. Imagine trying to explain why a particular species of bird migrates south for the winter. You could describe every muscle twitch and air current for one specific bird’s journey. But that would tell you almost nothing about why migration happens in general—it would be too specific. Sometimes you want to leave out details to see the big pattern.

This debate about “how much detail?” is surprisingly hard to settle. It touches on deeper questions about what we even want from explanations. Do we want the most precise description of what actually happened? Or do we want to understand why things happen the way they do, in a way that could apply to other similar situations?


The Abstraction Approach: Cutting Away What Doesn’t Matter

Michael Strevens, a philosopher at New York University, has a very different picture. He thinks good explanations come from abstracting away from all the messy details, keeping only what makes a real difference to the outcome.

Here’s how his view works. Imagine you want to explain why a specific window shattered at noon on Tuesday. Fundamentally, this involves an incredibly complicated set of physical events—air molecules bouncing, glass molecules vibrating, cracks propagating. But most of that detail is irrelevant to the simple question “why did it break?” It doesn’t matter exactly which molecule hit first, or the precise shape of the cracks, or the exact time. What matters is that a rock hit the window with enough force.

So Strevens says: take all the causal information, from physics if you want to go that deep, and then strip away everything that isn’t necessary to guarantee that the window breaks. What’s left is the explanation.

This is elegant, but it runs into a problem. Suppose two very different things can cause the same outcome. A window can break from a rock or from a sonic boom. If you try to make a single explanation that covers both cases, you might end up with something like “the window broke because something hit it hard enough”—which is true but feels like cheating. Strevens says we need a “cohesion” requirement: the different ways of causing the outcome must be similar enough, from the perspective of physics, that grouping them together makes sense. Rock impacts and sonic booms are too different. But different types of rock impacts (big rock, small rock, fast throw, slow throw) might be similar enough.

This is a clever idea, but critics point out that scientists routinely use explanations that cover very different physical situations. The Lotka-Volterra equations describe predator-prey relationships in lions and zebras, spiders and flies, and all sorts of other pairs that are physically very different. These equations explain something real—why populations oscillate—even though the details of how lions hunt are nothing like how spiders catch flies. If Strevens is right, these explanations are somehow defective. But scientists don’t seem to think so.
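To see what the Lotka-Volterra equations actually say, here is a minimal sketch that steps them forward in time. The parameter values are made up for illustration (they don’t come from any real population), and the simple Euler stepping is just the easiest way to watch the pattern emerge:

```python
def simulate(prey, predators, a=1.0, b=0.1, c=1.5, d=0.075, dt=0.01, steps=5000):
    """Step the Lotka-Volterra equations forward with simple Euler integration.

    dx/dt = a*x - b*x*y   (prey grow on their own, get eaten)
    dy/dt = d*x*y - c*y   (predators grow by eating, die off otherwise)
    """
    history = []
    for _ in range(steps):
        dprey = (a * prey - b * prey * predators) * dt
        dpred = (d * prey * predators - c * predators) * dt
        prey += dprey
        predators += dpred
        history.append((prey, predators))
    return history

history = simulate(prey=10.0, predators=5.0)
prey_values = [p for p, _ in history]
# The prey population rises and falls in repeating waves -- the oscillation
# the equations explain, whether the animals are lions or spiders.
print(min(prey_values), max(prey_values))
```

Notice that nothing in the code says anything about how the predators hunt. The same few lines describe lions and spiders alike, which is exactly the critics’ point.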


The Interventionist View: Asking “What If?”

Here’s a totally different approach, developed by philosopher James Woodward. Instead of focusing on mechanisms or abstraction, he asks: what makes an explanation useful?

Woodward’s answer is that a good explanation lets you answer what-if-things-had-been-different questions (he calls them “w-questions”). If I explain why the field strength around an electrical wire is a certain value by describing the charge density and the distance from the wire, then I’ve given you the tools to figure out what would happen if the charge density were higher, or if you moved farther away. A good explanation tells you how the outcome depends on the inputs.
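The wire example can be made concrete. The standard textbook formula for the field of a long, straight charged wire is E = λ / (2πε₀r); the sketch below plugs in made-up numbers purely to show how the formula lets you answer w-questions:

```python
import math

EPSILON_0 = 8.854e-12  # vacuum permittivity, in farads per meter

def field_strength(charge_density, distance):
    """Field of a long straight charged wire: E = lambda / (2*pi*eps0*r)."""
    return charge_density / (2 * math.pi * EPSILON_0 * distance)

# Answering w-questions by changing one input at a time
# (the numbers are illustrative, not from any real setup):
baseline = field_strength(charge_density=1e-9, distance=0.5)
farther = field_strength(charge_density=1e-9, distance=1.0)  # what if we stand farther away?
denser = field_strength(charge_density=2e-9, distance=0.5)   # what if the charge doubles?

# Doubling the distance halves the field; doubling the charge doubles it.
print(round(farther / baseline, 6), round(denser / baseline, 6))  # -> 0.5 2.0
```

The formula doesn’t just report one value; it tells you how the outcome depends on the inputs, which is Woodward’s test for a good explanation.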

This is actually a really natural way to think about science. When engineers design bridges, they need to know not just “this bridge holds” but “how would it behave under different loads?” When doctors prescribe medicine, they need to know not just “this drug works” but “what would happen if we changed the dose?” An explanation that only describes what happened, without telling you what would happen under different conditions, isn’t very useful.

But here’s where it gets tricky. According to this view, lower-level explanations (in terms of molecules and atoms) can always answer more w-questions than higher-level ones (in terms of temperature and pressure). That’s because there are more things you could change at the lower level. So this seems to imply that the deepest explanations are always the most fine-grained—which brings us back to the debate about detail we saw earlier.

Some philosophers push back hard. They argue that sometimes higher-level explanations are actually better because they capture patterns that get lost in all the lower-level noise. The fact that temperature explains gas behavior is useful precisely because it ignores all the individual molecular collisions—those details would make it harder to see the general pattern.


Can There Be Non-Causal Explanations?

So far, all the views we’ve discussed assume that explanations are about causes. But some philosophers wonder: are there explanations that don’t appeal to causes at all?

Here’s a famous example. In the 18th century, the city of Königsberg had seven bridges connecting different parts of the city. People wondered: can you walk through the city crossing each bridge exactly once? The mathematician Euler proved the answer was no—and he did it by looking at the structure of the bridges (which islands connected to which), not by anything about how people walk. The fact that people have two legs, that they get tired, that they walk at different speeds—none of that matters. The impossibility comes from the pure structure.
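Euler’s argument boils down to counting: a walk that crosses every bridge exactly once can exist only if zero or two landmasses touch an odd number of bridges. The sketch below encodes the seven bridges (the landmass names are made up, but the bridge counts match the historical puzzle) and does the counting:

```python
# The four landmasses and seven bridges of 18th-century Koenigsberg.
# Each pair is one bridge; duplicate pairs are parallel bridges.
bridges = [
    ("north_bank", "island_A"), ("north_bank", "island_A"),
    ("south_bank", "island_A"), ("south_bank", "island_A"),
    ("north_bank", "island_B"), ("south_bank", "island_B"),
    ("island_A", "island_B"),
]

def odd_degree_count(edges):
    """Count how many landmasses touch an odd number of bridges."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return sum(1 for d in degree.values() if d % 2 == 1)

# All four landmasses have odd degree (3, 3, 5, 3), so no such walk exists.
print(odd_degree_count(bridges))  # -> 4
```

The code never mentions legs, tiredness, or walking speed; the impossibility falls out of the structure alone, which is what makes the explanation feel non-causal.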

This feels like a non-causal explanation. The mathematical structure explains why you can’t do it, and it doesn’t matter what causes are operating. Similarly, you can explain why 23 strawberries can’t be evenly divided among three kids by appealing to arithmetic—the number 23 just isn’t divisible by 3. That’s a mathematical fact, not a causal one.

Some philosophers, like Michael Strevens, try to argue that even these cases are really about causes. The bridge structure is representing constraints on causal processes—you can only move along bridges, so the structure shapes what causal paths are possible. But others think that’s stretching the idea of “cause” too far. After all, arithmetic doesn’t cause anything. Twenty-three being indivisible by three isn’t a cause of anything—it’s just a truth about numbers.

This debate connects to something deeper: what do we even mean by “cause”? And should we expect everything to be explainable in causal terms? These questions are still very much unsettled.


Why This Matters

You might be thinking: okay, these are interesting puzzles, but who cares? Actually, these debates affect real science. When biologists design experiments, they need to decide what level to study. Should they look at the molecular details of how cells work, or at the large-scale behavior of whole organisms? The answer depends on what counts as a good explanation.

Similarly, when policymakers decide whether to ban a chemical, they need to know what explains its health effects. Is it enough to show that exposure correlates with disease, or do you need to understand the mechanism? Different views about explanation lead to different answers.

And for you, these questions might change how you think about your own explanations. Next time a friend asks “why did that happen?” try asking yourself: what would count as a good answer? Would you want the step-by-step mechanism, the stripped-down essential factors, or the ability to predict what would happen if things were different? There might not be a single right answer—but thinking about it is a good start.


Appendices

Key Terms

  • Mechanism – A step-by-step description of how parts work together to produce an outcome
  • Abstraction – The process of stripping away unnecessary details to find what really matters
  • Interventionist counterfactual – A “what if” question about what would happen if you changed one thing
  • w-questions – What-if-things-had-been-different questions; the mark of a good explanation according to interventionists
  • Cohesion – A requirement that the different ways of causing an outcome must be physically similar enough to group together
  • Non-causal explanation – An explanation that works without appealing to causes—often using math or structure

Key People

  • Carl Craver – A philosopher who argues that the best explanations are detailed mechanisms, and that anything less is just a “sketch”
  • Michael Strevens – A philosopher who thinks good explanations come from abstracting away irrelevant details, keeping only what makes a difference
  • James Woodward – A philosopher who measures explanations by how well they answer “what if” questions about the world
  • Michael Friedman – Not discussed in detail here, but a philosopher who argued good explanations show how many different phenomena follow from a few simple principles

Things to Think About

  1. You’re explaining to a friend why they got a bad grade on a test. Which approach seems best—giving the step-by-step mechanism (you didn’t study, you were tired, you misread question 3), abstracting to what matters most (you didn’t know the material), or showing what would have changed the outcome (if you had studied more, would you have passed?)? Are these really different explanations?

  2. Can you think of a situation where knowing more detail actually makes the explanation worse? For example, is it better to explain what a cell does by describing every molecule, or by saying “it produces energy”?

  3. The Königsberg bridge problem seems non-causal. But could you argue that it is causal after all? What would that argument look like? Would it be convincing?

  4. If two scientists disagree about which explanation is better, and they hold different philosophical views about explanation, is there any way to settle their disagreement? Or is it just a matter of opinion?

Where This Shows Up

  • In science class: When a textbook describes how photosynthesis works step-by-step, that’s a mechanistic explanation
  • In medicine: When doctors say “smoking causes lung cancer,” they mean there’s a causal relationship—but they might not know the exact mechanism. Is this still a good explanation?
  • In video game design: If a game has a physics engine, the developer needs to decide how detailed the simulation should be. Too much detail and the game slows down; too little and it looks fake. This is exactly the “how much detail?” debate
  • In everyday arguments: When someone says “the reason X happened is because of Y,” they’re making a claim about what matters. Philosophical debates about explanation help you see what’s at stake in those claims