What Are You *Really* Basing That Belief On?
Imagine you believe something. Let’s say you believe your best friend is honest. Why? Maybe it’s because you’ve never caught them in a lie. Maybe it’s because your parents told you they’re a good kid. Maybe it’s because you just have a feeling.
But here’s a question philosophers find surprisingly hard to answer: what does it mean for your belief to be “based on” a reason? Not “is the reason a good one” — that’s a separate question. The question is: what relation has to hold between your reason and your belief for the belief to count as based on that reason in the first place?
This is called the problem of the basing relation. And it matters because, for most philosophers, a belief can’t be justified (or count as knowledge) unless it’s based on the right kind of reason. If your belief that your friend is honest is actually based on wishful thinking, then even if the friend really is honest, your belief might not count as knowledge. You got lucky, but you weren’t thinking straight.
So how do we figure out when a belief is really based on a reason? Philosophers have come up with several different answers, and they still haven’t settled on one.
The Obvious Answer: Causation
The first answer many philosophers reached for was simple: your belief is based on a reason when the reason causes the belief. That is, your reason makes you believe what you believe, in the right kind of way.
This sounds straightforward. You see dark clouds, and that causes you to believe it might rain. Your reason (seeing the clouds) caused your belief (that it’ll rain). Seems like a basing relation.
But there’s a problem. Causal chains can go weird. Philosopher Alvin Plantinga came up with a famous example:
Imagine you suddenly see your friend Sylvia walk into the room. This surprises you, so you drop your teacup, scalding your leg. Now you believe your leg hurts. Your belief that you saw Sylvia caused your belief that your leg hurts. But is your belief that your leg hurts based on the fact that you saw Sylvia? Obviously not. You’re not thinking “I saw Sylvia, therefore my leg hurts.” That would be nonsense. The causation happened, but it wasn’t the right kind of causation.
This is called the problem of deviant causal chains. The causal chain went through dropping the tea and scalding your leg — it went outside your thinking processes. So simply saying “the reason caused the belief” isn’t enough. You need to specify what kind of causation counts.
One philosopher, Ru Ye, proposed a fix: your belief is based on a reason when the reason causes the belief and that causation is itself caused by your belief that the reason supports the belief. In other words, you have to believe (even if not in words) that the reason is a good reason. In the Sylvia case, you don’t believe that seeing Sylvia is a good reason to believe your leg hurts — and that’s why it doesn’t count.
Another philosopher, John Turri, suggested a different fix: the reason’s causing your belief has to manifest your cognitive traits — that is, it has to be the kind of thing your mind does when it’s working properly. Dropping your tea doesn’t manifest any thinking trait; it’s just a clumsy accident.
The Counterfactual Answer: What Would Have Happened
Other philosophers took a different approach. Instead of looking at what actually caused the belief, they asked: what would have caused the belief if things were different?
This idea comes from philosopher Marshall Swain. He was trying to solve a tricky case that philosopher Keith Lehrer invented:
Imagine a superstitious lawyer. A series of terrible murders has happened, and all the evidence points to the lawyer’s client being guilty of the eighth murder. But the lawyer reads the cards (he’s superstitious) and the cards say his client is innocent. So he believes his client is innocent based on the cards. Then, while examining the evidence, he finds a complicated but correct legal argument showing his client is actually innocent. He recognizes the argument is good. But his emotions are so strong — he really wants to believe the murderer has been caught — that the complicated legal argument can’t cause him to believe his client is innocent. Only the cards can do that. Nevertheless, it seems like the legal argument should count as a reason his belief is based on, because he recognizes it’s a good reason.
So Swain said: the legal argument is a pseudo-overdeterminant of the belief. That’s a fancy way of saying: if the actual cause (the cards) hadn’t happened, then the legal argument would have caused the belief. Since that’s the case, the belief counts as based on the legal argument too.
This is a clever idea, but it has problems. Consider a physics student who measures a pendulum’s length and calculates its period. She believes the pendulum has length L, and based on that, she believes it has period P. Her belief about the length is based on her measurement, not on her belief about the period. But according to Swain’s theory, her belief about the period might also count as a reason for her belief about the length — because if the measurement hadn’t happened, she could have calculated the length from the period instead. That seems wrong. The direction of the reasoning matters.
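For concreteness, here is the textbook small-angle pendulum formula that presumably lies behind the student’s calculation (the formula is standard physics, not part of the original case as stated). The point is that it can be solved in either direction, so each quantity is computable from the other:

```latex
P = 2\pi\sqrt{\frac{L}{g}}
\qquad\Longleftrightarrow\qquad
L = g\left(\frac{P}{2\pi}\right)^{2}
```

Because either equation can be rearranged to yield the other quantity, a purely counterfactual test like Swain’s cannot tell which belief was actually derived from which; it sees only the two-way dependence, not the one-way direction of the student’s reasoning.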
The Meta-Belief Answer: Thinking About Your Thinking
Another group of philosophers said: a belief is based on a reason only when you have a meta-belief about it — that is, a belief about your belief. Specifically, you need to believe that your reason is a good reason to hold your belief.
This is called a doxastic theory (from the Greek word “doxa,” meaning belief). On this view, the superstitious lawyer’s belief is based on the legal argument because he believes the legal argument is a good reason — even though it didn’t cause the belief.
But this has its own problems. First, what about people who don’t have the concept of “a good reason”? Young children and animals seem to base beliefs on reasons, but they probably don’t think “this is a good reason for that belief.” One way around this is to say they can have a non-verbal awareness of the reason being good — like being aware you’re thirsty without putting it into words.
Second, consider Ezekiel, who belongs to a cult and blindly believes everything his leader says. The leader tells him: “Your belief in God is a good reason for everything else you believe.” Ezekiel believes this. But does that really mean all his other beliefs are now based on his belief in God? Probably not. Ezekiel didn’t think through the connection; he was just told.
The Hybrid Answer: Both Causal and Meta-Belief
Given that neither pure causation nor pure meta-belief seems to work on its own, some philosophers have combined them. Keith Korcz proposed a causal-doxastic theory: a belief can be based on a reason either through the right kind of causation or through having the right kind of meta-belief.
The causal part handles cases where you just see something and form a belief without thinking about it — like seeing a tree and believing there’s a tree. The meta-belief part handles cases where you consciously reason about something, like the superstitious lawyer recognizing the legal argument is good.
The hybrid theory has a problem too: it might seem like cheating. If the basing relation is supposed to be one thing, saying it’s really two different things (depending on the situation) might just mean philosophers haven’t figured out the real answer yet.
Why This Matters
So why should you care about the basing relation?
First, because it’s connected to what counts as knowledge. Most philosophers think that for you to know something, you need to have good reasons and your belief needs to be based on those reasons in the right way. If your belief is based on superstition but happens to be true, you don’t really know it — you just got lucky.
Second, because it matters for how we evaluate our own thinking. If you can’t tell what your beliefs are actually based on, you can’t tell whether you’re thinking clearly. Some philosophers have even imagined a “debasing demon” — like Descartes’s evil demon, but instead of making you believe false things, this demon just messes up which reasons your beliefs are based on. You might have all the right reasons and all the right beliefs, but they’re connected wrong. If that’s possible, then maybe you don’t know as much as you think.
Third, because it’s a genuinely weird puzzle. You’ve probably had the experience of not knowing why you believe something. Someone asks “why do you think that?” and you realize you’re not sure. But you still believe it. That’s the mystery of the basing relation: sometimes we don’t even know our own minds well enough to say what our beliefs are based on.
Nobody has fully solved this puzzle. Different philosophers defend different theories, and the debate is still active. But thinking about it might make you more careful about what you believe — and why.
Appendix: Key Terms
| Term | What it does in the debate |
|---|---|
| Basing relation | The relation between a reason and a belief when the belief is held because of that reason |
| Deviant causal chain | A causal connection that goes through the wrong events (like dropping tea) so it doesn’t count as proper basing |
| Pseudo-overdeterminant | A reason that would have caused the belief if the actual cause hadn’t happened, used to explain why some non-causing reasons still count |
| Meta-belief | A belief about another belief — like believing that your reason is a good reason |
| Cognitive trait | A habit or disposition of your mind, like the tendency to trust your senses or to reason logically |
| Doxastic theory | Any theory that says beliefs about reasons (meta-beliefs) are necessary for the basing relation |
| Causal-doxastic theory | A hybrid view: a belief can be based on a reason either through causation or through having a meta-belief |
Appendix: Key People
- Alvin Plantinga — A philosopher who came up with the tea-spilling example (seeing Sylvia) that shows how causal chains can go wrong and fail to establish basing relations.
- Keith Lehrer — Created the “superstitious lawyer” example, which challenges the idea that basing requires actual causation.
- Marshall Swain — Proposed the counterfactual theory using pseudo-overdeterminants to handle cases like the superstitious lawyer.
- Ru Ye — Developed a causal theory where the causation must itself be caused by your belief that the reason supports the belief.
- John Turri — Proposed that the basing relation requires the causation to manifest your cognitive traits (your mind’s normal ways of working).
- Keith Korcz — Developed a hybrid causal-doxastic theory that combines both causal and meta-belief approaches.
Appendix: Things to Think About
- Can you think of a belief you hold where you’re not sure what it’s based on? Is it possible that your belief is based on something different than what you think it’s based on? How would you find out?
- The superstitious lawyer case raises a question: can a reason be “your reason” for believing something even if it didn’t cause the belief? What would make it “yours”?
- If you had to design an experiment to test whether someone’s belief was really based on a particular reason, what would you do? Is this something science could ever settle, or is it purely philosophical?
- Imagine a robot that processes information perfectly — it takes in data, runs logical rules, and outputs beliefs. Does it have basing relations? Does it matter whether it’s aware of its own reasoning?
Appendix: Where This Shows Up
- Law and courtrooms: Lawyers argue about whether a witness’s belief is “based on” what they actually saw or on something else (like suggestion or prejudice). The whole concept of eyewitness testimony depends on the basing relation.
- Fake news and misinformation: When people believe false things, their beliefs are often based on bad reasons (like a rumor from social media) instead of good reasons (like checking reliable sources). Understanding the basing relation helps explain why correcting misinformation is hard — you can’t just give people the right facts; you have to somehow change what their beliefs are based on.
- Education: When a teacher says “show your work,” they’re asking about the basing relation. They want to know what your answer is based on, not just whether it’s right. This is the same puzzle philosophers are trying to solve.
- Artificial intelligence: When we say an AI “believes” something or “reasons” about it, we’re implicitly using an idea of the basing relation. Understanding what basing means for humans might help us understand whether AI systems can truly be said to have reasons for their outputs.