What Do You Really Know? The Puzzle of Epistemic Closure
Here’s a strange thing philosophers noticed about knowledge. Suppose you’re standing at the zoo in front of a cage marked “Zebra.” Inside is a perfectly ordinary zebra. You look at it, you see its stripes, and you think: “I know that’s a zebra.”
Now, if it’s a zebra, then it’s definitely not a cleverly disguised mule painted to look like a zebra. That’s just logic: if something is a zebra, it can’t be a mule in disguise. So you think to yourself: “I know it’s not a mule painted to look like a zebra.”
But wait—do you really know that? Could you tell the difference? If there were a mule painted to look like a zebra standing in that cage, would it look any different to you? Probably not. What you see—the striped animal—would be exactly the same.
So here’s the puzzle: You seem to know that it’s a zebra. And you know that if it’s a zebra, it’s not a painted mule. But do you actually know that it’s not a painted mule? Many philosophers say no. And that creates a problem.
The problem is called epistemic closure, and it’s about whether knowing one thing means you also know the things that follow from it. It sounds simple, but it leads to some of the deepest arguments in philosophy—about skepticism, about what counts as evidence, and about whether we really know anything at all.
What Is Closure, Exactly?
Here’s the basic idea. Philosophers want to know if the following principle is true:
If you know that p is true, and you realize that p logically implies q, then you also know that q is true.
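In the symbolic shorthand epistemologists use, with K p for "you know that p," the principle reads:

```latex
% Closure under known implication: if you know p, and you know
% that p entails q, then you know q.
\[
  \bigl( Kp \wedge K(p \rightarrow q) \bigr) \rightarrow Kq
\]
```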
That seems reasonable, right? If I know that I’m holding a book, and I know that if I’m holding a book then I’m holding something physical, then I should know that I’m holding something physical. Logic seems to let us extend our knowledge.
But the zebra case suggests something weird. You know the zebra is there. You know that if it’s a zebra, it’s not a painted mule. Yet it feels like you don’t really know it’s not a painted mule—because you couldn’t tell the difference. So either:
- You actually do know it’s not a painted mule (which might seem strange), or
- You don’t actually know it’s a zebra either (which also seems strange), or
- The principle of closure is false—knowing one thing doesn’t always let you know the things that follow from it.
Each option has defenders. Let’s look at the main positions.
How Could Closure Fail?
Two influential philosophers, Fred Dretske and Robert Nozick, argued that closure fails. They thought that knowing something requires that your belief “tracks” the truth in a special way.
Here’s the tracking idea. To know that something is true, roughly, you need to be in a position where: if it were false, you wouldn’t believe it. That’s what tracking means—your belief follows the facts around.
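Nozick's full account in *Philosophical Explanations* (1981) spells this out as four conditions. In the standard notation, where Bp means "S believes p" and the boxed arrow is the subjunctive ("if it were the case that…") conditional, S knows p only if:

```latex
\begin{align*}
  &\text{(1) } p \text{ is true} \\
  &\text{(2) } S \text{ believes } p \\
  &\text{(3) } \neg p \mathrel{\Box\!\rightarrow} \neg Bp
     \quad \text{(if } p \text{ were false, } S \text{ would not believe } p\text{)} \\
  &\text{(4) } p \mathrel{\Box\!\rightarrow} Bp
     \quad \text{(if } p \text{ were true, } S \text{ would still believe } p\text{)}
\end{align*}
```

Condition (3) is the one doing the work in the zebra case: it is what the painted-mule belief fails.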
Let’s test the zebra case with tracking.
Zebra belief: If it weren’t a zebra, would you still believe it was? Probably not. If the cage were empty, or there were an aardvark inside, you’d notice. Your belief tracks the truth.
Not-a-painted-mule belief: If it were a painted mule, would you still believe it wasn’t? Yes—because it would look exactly the same. Your belief doesn’t track the truth here.
So according to Dretske and Nozick, you genuinely know it’s a zebra. But you don’t know it’s not a painted mule. Knowledge isn’t closed under logical implication. Careful reasoning from what you know doesn’t always give you new knowledge.
This is a strange conclusion. It means you could be in a situation where you know p, you know p implies q, you believe q because of that, and yet you don’t know q. Knowledge doesn’t automatically pass through logical connections.
What’s at Stake? Skepticism
You might wonder why philosophers care so much about this. Here’s why: it connects directly to one of the oldest problems in philosophy—skepticism about the external world.
Consider this: You think you know you have hands. But if you have hands, then you’re not just a brain in a vat being fed fake experiences by scientists. Those are logically connected.
But do you know you’re not a brain in a vat? Probably not—you can’t rule it out. And if closure is true, then not knowing you’re not a brain in a vat means you don’t know you have hands either. The skeptic wins.
If closure is false, you can say: “I know I have hands, even though I don’t know I’m not a brain in a vat.” That’s Dretske and Nozick’s solution to skepticism. We have ordinary knowledge, but it doesn’t reach into those weird skeptical possibilities. Knowledge has limits.
But many philosophers find this hard to accept. The closure principle seems so obvious—how could knowing something not let you know what follows from it? These philosophers try to find another way out.
Holding On to Closure: Safety
Philosophers who want to save closure offer a different account of knowledge. Instead of tracking (if p were false, you wouldn’t believe it), they say knowledge requires safety (your belief couldn’t easily be wrong).
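In the usual possible-worlds gloss (a sketch; published formulations differ on "all" versus "nearly all" nearby worlds, and on holding the basis of belief fixed), safety looks like this:

```latex
% Safety: a belief is safe when it could not easily have been false.
\[
  S \text{ safely believes } p \iff
  \text{in all nearby worlds } w \text{ where } S \text{ believes } p
  \text{ (on the same basis), } p \text{ is true in } w
\]
```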
This gets technical, but here’s what it accomplishes. If safety is what matters, then in the zebra case, you do know it’s not a painted mule. Why? Because in the actual situation, your visual experiences safely indicate that there’s a zebra. And if there’s a zebra, there’s definitely not a painted mule. So your experiences also safely indicate that fact.
The safety account preserves closure. If you safely know something, you also safely know its logical consequences. The zebra case no longer causes trouble.
But safety has its own problems. Consider the “red barn” case. Imagine you’re driving through an area where many fake barns have been set up. They’re all blue, though. You look at a real barn that happens to be painted red. You think: “I know that’s a red barn.”
Do you? On the safety account, maybe. Your red-barn experiences safely indicate a red barn—since the fake barns are all blue. But what about “I know that’s a barn”? If you were looking at a blue fake, you’d still think “barn.” So your belief “barn” isn’t safe.
Here’s the tricky part: “red barn” implies “barn.” If closure holds, and you know it’s a red barn, you should know it’s a barn. But many people feel you don’t. You got lucky with the color, but you’re in a neighborhood full of barn fakes. Some philosophers say you don’t really know either thing.
The debate gets messy. But notice what’s happening: trying to figure out what knowledge is (tracking? safety? something else?) determines whether closure holds. And whether closure holds determines whether skeptics can defeat our ordinary claims to know things.
The Lottery Problem
Here’s another way closure causes trouble, and it doesn’t require weird skeptical scenarios.
Suppose you buy a ticket in a lottery with a million tickets. You know the odds of winning are tiny. Do you know you’ll lose? Most people say no—you could get lucky, and you don’t know you won’t.
But consider: I know I won’t buy a villa in France tomorrow (I’m broke). I also know that if I won the lottery, I would buy that villa. “I won’t buy the villa” plus “if I win, I’ll buy it” logically implies “I won’t win.” So if closure holds, I can deduce that I won’t win—which means I do know I’ll lose.
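Writing W for “I win” and V for “I buy the villa,” the deduction is just modus tollens:

```latex
\[
  \neg V, \qquad W \rightarrow V \quad \therefore \quad \neg W
\]
```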
But that’s strange! It seems like I can’t know I’ll lose just by reasoning about my financial plans. Something has gone wrong.
One option: maybe I don’t actually know I won’t buy the villa. After all, who knows what could happen? But that seems too skeptical—there are lots of things I know I won’t do tomorrow.
Another option: maybe I do know I’ll lose the lottery. This sounds odd, but some philosophers bite the bullet. They say: given the odds, and given what you know, you genuinely know your ticket will lose. It just feels weird to say it.
A third option: deny closure again. Dretske and Nozick would say: you know you won’t buy the villa, but you don’t know you won’t win, even though it follows. Knowledge doesn’t always transmit through logic.
What About Justified Belief?
The closure debate isn’t just about knowledge. It’s also about whether justified or rational belief is closed under logic.
This part gets complicated, but here’s why it matters. Even if you deny closure for knowledge, you might still think that if you rationally believe something, you should rationally believe its consequences. But the lottery shows this is tricky.
Suppose you’re 99.9999% sure your ticket will lose (that’s the probability in a million-ticket lottery). That seems enough to be justified in believing it will lose. Same for ticket 2, ticket 3, and so on. But if you put all those justified beliefs together—“ticket 1 loses, and ticket 2 loses, and ticket 3 loses… all the way up to ticket 1,000,000”—you get the conclusion that no ticket will win. But that can’t be right; someone has to win.
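The arithmetic behind the puzzle is easy to check. This toy calculation assumes a million-ticket lottery; in a real lottery exactly one ticket wins, so the probability that every ticket loses is zero, but even pretending the tickets are independent, a million individually near-certain beliefs conjoin into something far from certain:

```python
# Each individual belief ("ticket i loses") is near-certain, but the
# conjunction of all of them is not, even granting independence.
N = 1_000_000
p_single = 1 - 1 / N              # P(a given ticket loses) = 0.999999
p_all_independent = p_single ** N # conjunction, if tickets were independent

print(f"each belief: {p_single:.6f}")
print(f"conjunction: {p_all_independent:.4f}")  # about 0.37, roughly 1/e
```

High probability survives each single step but leaks away under conjunction; that leak is exactly what the closure principle for justification ignores.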
So here’s a puzzle: you’re justified in believing each individual ticket will lose, but you’re not justified in believing they’ll all lose. The logical combination of justified beliefs gives you something unjustified. Some philosophers think this means justification itself can’t be closed under logic. Others think it means you’re never really justified in believing your ticket will lose—high probability isn’t enough for justification.
So What’s the Right Answer?
Philosophers still argue about this. Nobody has settled the closure debate.
What makes it so difficult is that both sides have good arguments. Closure seems like a basic rule of reasoning—if you know something, you should be able to reason from it and know what follows. But the zebra case and the lottery case make closure look false. And the skeptical threat makes us want closure to be false (so we can keep our ordinary knowledge) while also wanting it to be true (so logic works the way we expect).
The debate has forced philosophers to think hard about what knowledge really is. Is it tracking, safety, something else entirely? Your answer to that question determines whether closure holds. And your answer about closure determines how you respond to the skeptic.
This is one of those places in philosophy where the arguments go in circles—but productive circles. Each position reveals something interesting about knowledge that you might not have noticed before. The zebra at the zoo turns out to be a lot more puzzling than it looks.
Appendices
Key Terms
| Term | What it does in this debate |
|---|---|
| Epistemic closure | The idea that if you know something, you also know (or can come to know) what logically follows from it |
| Tracking | A theory of knowledge: you know something when your belief would change if the facts changed |
| Safety | A theory of knowledge: you know something when your belief couldn’t easily be wrong |
| Skeptical hypothesis | A scenario (like being a brain in a vat) that is hard to rule out but would make your ordinary beliefs false |
| Limiting/heavyweight proposition | A basic claim (like “there are physical objects”) that seems hard to know even though ordinary things imply it |
| Lottery proposition | A claim that is very probable but not certain, like “my ticket will lose” |
| Justified belief | A belief you have good reason to hold, even if it might turn out false |
Key People
- Fred Dretske – An American philosopher who argued that knowledge requires “conclusive reasons” and that closure fails; famous for the zebra example
- Robert Nozick – An American philosopher who argued knowledge requires “tracking” the truth, and that closure fails (also famous for work on political philosophy)
- G. E. Moore – An early 20th-century British philosopher who argued against skepticism by saying we know ordinary things and can reason from them; his approach is sometimes called “dogmatism”
Things to Think About
- You’re looking at your best friend’s face. You know it’s them. You also know that if it’s them, it’s not a perfect impersonator. Do you know it’s not a perfect impersonator? Could you tell the difference? What does your answer suggest about closure?
- Suppose closure is false. Does that mean you can never trust your reasoning to give you new knowledge? Or is there a difference between “this follows from what I know” and “I can know this by reasoning”?
- In the lottery case, do you think you actually know your ticket will lose? Why does it feel strange to say yes?
- If you deny closure to avoid skepticism, you save ordinary knowledge (you know you have hands) but give up the ability to know you’re not being fooled. Is that a fair trade?
Where This Shows Up
- Courtrooms – Lawyers argue about what a witness “really knows” versus what they’re just assuming follows from what they saw
- Science – Scientists reason from observations to conclusions about unobservable things (like electrons). Closure questions arise about whether they know those conclusions or just believe them
- Everyday arguments – When someone says “Well, if you know that, then you must know this too,” they’re relying on a closure principle. The zebra case shows why that move can be tricky
- Artificial intelligence – AI systems that reason from data to conclusions face the same logical problems about whether their inferences preserve knowledge or just probability