What Counts as Rational? The Puzzle of Bounded Rationality
Imagine you’re playing a game where you have to guess which of two cities has a larger population. You don’t have internet access or a phone. You just have to think. Most people, when asked this question, use a simple rule: if you’ve heard of one city but not the other, guess that the one you’ve heard of is bigger. And this actually works pretty well.
Now imagine you’re trying to decide what to have for lunch. You don’t calculate the perfect balance of nutrition, cost, and enjoyment. You just pick something that seems “good enough” and move on.
These are examples of what philosophers and cognitive scientists call bounded rationality. The idea sounds simple: we human beings are not perfectly rational calculators. We have limited time, limited information, and limited brainpower. So we make decisions using shortcuts, rules of thumb, and strategies that are “good enough” rather than perfect.
But here’s where it gets weird. The big question this article tackles is this: Does being bounded mean we’re irrational? Or could it be that the “perfectly rational” standard is actually the wrong way to think about how smart creatures like us should make decisions?
This question has split researchers into two camps that have been fighting about it for decades. Let’s explore what the puzzle is and why it matters.
The Myth of the Perfect Calculator
For a long time, economists and philosophers built their theories around a hypothetical creature sometimes called homo economicus — “economic man.” This imaginary being has complete information about every option available, perfect foresight about what will happen if they choose each option, and unlimited ability to calculate which option will give them the best outcome. They never forget anything. They never get tired. They never make mistakes.
This creature is supposed to be the model of perfect rationality. And for a long time, many people thought that actual human beings, when they’re being rational, ought to behave roughly like this creature would.
There’s a famous set of rules that this perfect calculator is supposed to follow, called expected utility theory. The idea is that if your preferences satisfy certain mathematical properties (they’re complete, meaning you can compare any two options; transitive, meaning if you prefer A to B and B to C, you prefer A to C; and a few others), then your behavior can be represented as maximizing something called expected utility. You act as if you’re assigning numbers to outcomes and choosing the option with the highest expected number.
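To make the idea concrete, here is a minimal sketch in Python of what “maximizing expected utility” amounts to. The options, outcomes, and utility numbers are invented for illustration: assign a utility to each outcome, weight it by its probability, and pick the option whose weighted sum is highest.

```python
# Minimal sketch of expected-utility maximization.
# The options, outcomes, and utility numbers are invented for illustration.

def expected_utility(option):
    """Sum of probability * utility over an option's possible outcomes."""
    return sum(prob * utility for prob, utility in option["outcomes"])

options = [
    {"name": "safe bet",  "outcomes": [(1.0, 50)]},            # $50 for sure
    {"name": "risky bet", "outcomes": [(0.5, 120), (0.5, 0)]},  # coin flip: $120 or nothing
]

for opt in options:
    print(opt["name"], expected_utility(opt))

best = max(options, key=expected_utility)
print("Expected-utility maximizer picks:", best["name"])
```

With these made-up numbers the maximizer picks the gamble (expected utility 60 beats 50), which is exactly the kind of cold calculation homo economicus is supposed to perform every time.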
This sounds reasonable in principle. But here’s the problem: real human beings don’t actually behave this way.
What Real People Actually Do
In the 1950s, the economist and political scientist Herbert Simon started asking an uncomfortable question. Consider the game of chess. The number of possible chess games is larger than the number of atoms in the observable universe. No human being — and no computer, for that matter — can calculate the “optimal” move by considering every possible future. And yet human beings play chess. They use strategies, heuristics, shortcuts. They look a few moves ahead and make a judgment.
Simon argued that this isn’t a bug in human cognition — it’s a feature. He proposed that instead of trying to maximize (find the absolute best option), people typically satisfice — they look for an option that meets some minimum threshold of acceptability and then stop searching. “Good enough” replaces “perfect.”
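Here is a rough sketch of the contrast in Python. The lunch options, their scores, and the “good enough” threshold are all invented: the maximizer inspects every option before choosing, while the satisficer stops at the first acceptable one.

```python
# Toy contrast between maximizing and satisficing.
# Option scores and the acceptability threshold are invented for illustration.

lunch_options = [("leftovers", 6), ("sandwich shop", 7), ("new ramen place", 9), ("fancy bistro", 8)]

def maximize(options):
    """Examine every option and return the single best one."""
    return max(options, key=lambda item: item[1])

def satisfice(options, threshold=7):
    """Return the first option that clears the threshold, then stop searching."""
    for name, score in options:
        if score >= threshold:
            return (name, score)
    return None  # nothing acceptable found

print("Maximizer:", maximize(lunch_options))    # checks all four, picks the ramen place
print("Satisficer:", satisfice(lunch_options))  # checks two, settles for the sandwich shop
```

The satisficer gives up a slightly better lunch in exchange for doing far less searching, which is the whole trade Simon had in mind.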
Simon called this approach procedural rationality. Instead of asking whether your final choice is the best possible, you ask whether your process for making decisions is sensible given your limitations.
The Framing Problem
In the 1970s and 1980s, psychologists Daniel Kahneman and Amos Tversky conducted a famous series of experiments showing that people systematically violate the rules of expected utility theory. Here’s one of their most famous examples:
Imagine a disease is expected to kill 600 people. You have to choose between two programs:
- Program A: 200 people will be saved.
- Program B: There’s a 1/3 chance that 600 people will be saved, and a 2/3 chance that nobody will be saved.
Most people choose Program A. They prefer the sure thing.
But now consider the same problem framed differently:
- Program C: 400 people will die.
- Program D: There’s a 1/3 chance that nobody will die, and a 2/3 chance that 600 people will die.
Most people now choose Program D. Even though A and C are mathematically identical (same outcome), and B and D are mathematically identical (same odds), people’s preferences flip depending on how the problem is described.
This is called a framing effect. It doesn’t make sense if you’re a perfectly rational expected-utility maximizer. But it makes perfect sense if you’re a human being who cares about losses and gains relative to some reference point — and who feels losses more painfully than equivalent gains.
Kahneman and Tversky developed prospect theory to model how people actually make decisions under risk. Their key findings:
- Reference dependence: People evaluate outcomes relative to a reference point (like their current situation), not in absolute terms.
- Loss aversion: Losses hurt about twice as much as equivalent gains feel good.
- Diminishing sensitivity: The difference between $0 and $100 feels bigger than the difference between $900 and $1000.
- Probability weighting: When deciding, people overweight small probabilities and underweight moderate and large ones.
These observations don’t just describe mistakes. They describe a system that works reasonably well in the real world — even if it fails the tests of perfect mathematical rationality.
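For readers who want to see the machinery, here is a small sketch of the value function prospect theory uses. The parameter values are in the ballpark of the ones Tversky and Kahneman later estimated (roughly 0.88 for diminishing sensitivity and 2.25 for loss aversion); the exact numbers matter less than the shape of the curve.

```python
# Sketch of the prospect-theory value function.
# Parameters are roughly those estimated by Tversky and Kahneman (1992).

ALPHA = 0.88    # diminishing sensitivity for gains
BETA = 0.88     # diminishing sensitivity for losses
LAMBDA = 2.25   # loss aversion: losses weigh roughly twice as much as gains

def value(x):
    """Subjective value of a gain or loss x, measured from the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# Loss aversion: a $100 loss hurts more than a $100 gain feels good.
print(value(100), value(-100))    # roughly 57.5 vs. -129.4

# Diminishing sensitivity: $0 -> $100 feels bigger than $900 -> $1000.
print(value(100) - value(0), value(1000) - value(900))
```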
The Two Schools of Heuristics
This brings us to the big disagreement that runs through this whole topic. There are two main groups of researchers who study how people make decisions with limited information and limited time. They agree on many of the facts about what people do, but they disagree sharply about what it means.
The Biases and Heuristics School
Kahneman and Tversky’s tradition sees human decision-making as prone to systematic errors — biases — that arise from using mental shortcuts called heuristics. On this view, the normative standard (the ideal we should aim for) is something like expected utility theory or Bayesian probability. When people deviate from this standard, they’re making mistakes. Sometimes those mistakes are understandable, but they’re still mistakes.
For example, consider the conjunction fallacy. Kahneman and Tversky gave people this description:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was very concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Then they asked people to rank the probability of different statements about Linda. One was “Linda is a bank teller.” Another was “Linda is a bank teller and is active in the feminist movement.”
Logically, the probability of two things being true together (Linda being a bank teller AND a feminist) can’t be higher than the probability of just one of them being true (Linda being a bank teller). But most people rated the conjunction as more likely. This looks like a clear violation of probability theory — a bias.
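The conjunction rule itself is just arithmetic: the probability of a conjunction equals the probability of one part times a conditional probability that can never exceed 1. A minimal check, with made-up numbers:

```python
# The conjunction rule: P(A and B) = P(A) * P(B given A) <= P(A),
# because P(B given A) can never exceed 1. The numbers below are invented.

p_bank_teller = 0.05            # P(Linda is a bank teller)
p_feminist_given_teller = 0.8   # P(active feminist, given she is a bank teller)

p_conjunction = p_bank_teller * p_feminist_given_teller
print(p_conjunction <= p_bank_teller)  # True, for any values in [0, 1]
```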
The Fast and Frugal School
The other camp, led by Gerd Gigerenzer and the ABC Research Group, sees things very differently. They argue that heuristics aren’t necessarily errors — they’re smart adaptations to a world where we have limited time and information.
Take the “recognition heuristic” we started with: if you have to guess which of two cities is bigger, and you recognize one but not the other, guess the one you recognize. This sounds stupid — shouldn’t you consider more information? But it turns out this heuristic works remarkably well in many real-world situations. In fact, it can outperform more complex strategies.
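The heuristic itself is almost trivially simple to write down. In the sketch below, the city names and the set of “recognized” cities are invented; a real study would measure recognition for each participant.

```python
import random

# Sketch of the recognition heuristic for the "which city is bigger?" game.
# The recognized set is invented for illustration.
recognized = {"Munich", "Hamburg", "Cologne"}

def guess_bigger(city_a, city_b):
    """If exactly one city is recognized, guess that one; otherwise guess at random."""
    known_a, known_b = city_a in recognized, city_b in recognized
    if known_a and not known_b:
        return city_a
    if known_b and not known_a:
        return city_b
    return random.choice([city_a, city_b])  # recognition gives no clue here

print(guess_bigger("Munich", "Bielefeld"))  # Munich: recognized beats unrecognized
print(guess_bigger("Munich", "Hamburg"))    # coin flip: both recognized
```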
Gigerenzer’s group argues that what looks like a bias in a psychology experiment might be an adaptive strategy in the real world. The question isn’t “does this heuristic violate the rules of probability?” — it’s “under what conditions does this heuristic lead to good decisions?”
This is called ecological rationality. A decision-making strategy is rational not if it satisfies abstract mathematical axioms, but if it fits the structure of the environment where it’s being used.
Less Is More
Here’s a genuinely surprising result that comes from this second school: sometimes having less information or using simpler strategies leads to better decisions.
For example, there’s something called tallying. Instead of carefully weighting different factors to make a prediction, you just count up how many factors point in each direction and go with the majority. This simple rule often performs as well as or better than more complex statistical models, especially when you have limited data.
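Here is a toy illustration of the contrast. The cues (pieces of evidence pointing toward a “yes” or “no” answer) and the weights used by the more sophisticated model are invented:

```python
# Tallying vs. weighted scoring for a yes/no prediction.
# Cue values (+1 = points toward "yes", -1 = toward "no") and weights are invented.

cues = [+1, +1, -1, +1]          # four pieces of evidence about, say, a job candidate
weights = [0.9, 0.3, 0.7, 0.2]   # a regression-style model's learned weights

def tally(cues):
    """Just count the cues: predict "yes" if more point toward yes than no."""
    return "yes" if sum(cues) > 0 else "no"

def weighted(cues, weights):
    """Weight each cue before summing, as a statistical model would."""
    score = sum(c * w for c, w in zip(cues, weights))
    return "yes" if score > 0 else "no"

print(tally(cues), weighted(cues, weights))
```

The tallying rule ignores the weights entirely, yet with scarce or noisy data it often predicts about as well as the model that estimates them.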
In machine learning, this is related to the bias-variance tradeoff. A very complex model might fit your existing data perfectly but fail to generalize to new situations (this is called overfitting). A simpler model with some bias — some systematic tendency to be wrong in a particular way — might actually make better predictions overall because it doesn’t get confused by noise in the data.
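You can see overfitting for yourself with a few lines of code, assuming NumPy is available: fit a straight line and a high-degree polynomial to a handful of noisy points generated from a linear rule, then compare their errors on fresh points. The data here are invented, and the details change with the random seed, but the complex model typically does worse away from the training points.

```python
import numpy as np

# Sketch of overfitting: the data follow a simple linear rule plus noise.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 10, 10)
y_train = 2 * x_train + rng.normal(0, 2, size=x_train.shape)

simple_fit = np.polyfit(x_train, y_train, deg=1)   # straight line: biased but stable
complex_fit = np.polyfit(x_train, y_train, deg=7)  # high-degree polynomial: chases the noise

# Evaluate both models on fresh points neither fit has seen.
x_new = np.linspace(0.5, 9.5, 100)
y_true = 2 * x_new
err_simple = np.mean((np.polyval(simple_fit, x_new) - y_true) ** 2)
err_complex = np.mean((np.polyval(complex_fit, x_new) - y_true) ** 2)
print(f"simple model error:  {err_simple:.2f}")
print(f"complex model error: {err_complex:.2f}")
```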
Or consider the case of tit-for-tat in game theory. In the 1980s, political scientist Robert Axelrod held a tournament where computer programs played the prisoner’s dilemma (a game about cooperation and betrayal) against each other repeatedly. The simplest strategy submitted — cooperate on the first move, then copy whatever your opponent did last time — scored higher overall than every more complex entry. Sometimes simple is better.
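Here is a minimal sketch of an iterated prisoner’s dilemma in Python, using the standard Axelrod-style payoffs. The “always defect” opponent is just one illustrative pairing, not a reconstruction of the tournament:

```python
# Iterated prisoner's dilemma sketch. "C" = cooperate, "D" = defect.
# Payoffs (mine, theirs) are the standard values used in Axelrod-style tournaments.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # exploited once, then mutual defection
print(play(tit_for_tat, tit_for_tat))    # full cooperation, high scores for both
```

Notice that tit-for-tat loses a little to a pure defector in this pairing, but it racks up points whenever it meets a cooperative partner, which is how it came out on top overall.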
The Problem of Perfect Information
There’s another deep issue here that gets at the foundations of rationality itself. The standard theories of rational decision-making assume logical omniscience — that you know all the logical consequences of everything you believe. This sounds reasonable until you realize what it means.
Do you know whether the 10,000th digit of pi is a 7? If you’re logically omniscient, yes — because pi’s digits are determined by the definition of pi, and if you understand the definition, you should know all its consequences. But nobody actually knows the 10,000th digit of pi without calculating it. And calculating it takes time and energy.
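You can make the cost concrete, assuming the mpmath library is installed: the digit is fully determined by the definition of pi, but the answer only shows up after the machine actually does the work.

```python
from mpmath import mp

# Computing the 10,000th decimal digit of pi takes real computation:
# the digit is fixed by the definition of pi, but nobody "knows" it for free.
mp.dps = 10010                        # work with a bit more precision than we need
digits = mp.nstr(+mp.pi, 10005)       # pi as a string: "3." followed by decimal digits
print(digits[2 + 10000 - 1])          # the 10,000th digit after the decimal point
```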
The mathematician and statistician Leonard Savage, who did foundational work on expected utility theory, noticed this problem in the 1960s. He asked: if you have to bet on a digit of pi, should you spend time computing it? The theory says you should act on all the logical implications of what you know — but computing those implications is costly. Savage worried that trying to incorporate the cost of thinking into the theory might lead to paradox.
Nobody has fully solved this problem. It raises the possibility that all rationality is bounded — that perfect rationality is impossible not just for humans, but for any physical system, because any computation uses energy and takes time, and those costs are real.
So, Are We Irrational?
This is where the article leaves us with its central question unresolved — and honestly, that’s where philosophy lives. Different researchers give different answers:
- The coherence camp says we’re irrational when our beliefs and preferences contradict each other (like preferring A to B, B to C, and C to A in a cycle). By this standard, people sometimes are irrational, though less often than you might think.
- The accuracy camp says we’re rational when our decisions lead to good outcomes in the actual environments where we make them. By this standard, heuristics can be highly rational even when they violate the rules of probability.
- The interpretive camp says rationality is something we grant to other people to make sense of their behavior. If we can understand why someone did something — even if we disagree with it — we’re treating them as rational.
- The pragmatist camp says rationality is about the process of changing your mind in response to doubt and experience, not about the static consistency of your beliefs at any moment.
The philosopher Roy Sorensen once suggested that rationality might be like cleanliness — it’s just the absence of dirt (or irrationality). But there are many ways to be dirty, and no single positive account of what cleanliness is.
Why This Matters
You might think this is all academic navel-gazing, but the question of bounded rationality has real-world consequences. It affects:
- How we design public policy. If people are predictably irrational, should governments “nudge” them toward better decisions (like automatically enrolling people in retirement savings plans)? Or is that paternalistic?
- How we build artificial intelligence. Should AI systems try to be perfectly rational calculators, or should they use heuristics like humans do? (Current AI systems actually use a lot of bounded rationality techniques.)
- How we understand ourselves. Are we fundamentally flawed reasoners who need to be saved from our own stupidity? Or are we remarkably adaptive creatures who have evolved strategies that work well in the environments we actually face?
The philosopher Jonathan Bennett once wrote about bees. Bees aren’t consciously rational, but their behavior is appropriate to their needs — they find flowers, communicate their location, build hives. Bennett asked whether we should say bees are rational in some sense. His point was that rationality might not be about following abstract rules. It might be about behaving effectively in a world that presents you with limited information and limited time.
And that’s exactly the question this whole debate is about. What does it mean to think well when you’re not a god — when you’re a creature with a finite brain, living in a complex world, making decisions on the fly?
Nobody has settled this question. But asking it — and taking it seriously — changes how you see everyday decisions. The next time you pick a lunch spot, guess a city’s population, or make a snap judgment about someone, you’re not just being sloppy. You’re doing what every bounded creature has to do: making the best decision you can with what you’ve got.
Appendix
Key Terms
| Term | What it does in this debate |
|---|---|
| Bounded rationality | The idea that real decision-making is shaped by real limitations (time, information, computing power), and that good decision-making must account for these limits |
| Expected utility theory | A mathematical framework for making perfect decisions under uncertainty, used as the standard that boundedly rational decisions are measured against |
| Satisficing | A strategy that picks the first option meeting a minimum threshold, rather than searching for the absolute best option |
| Heuristic | A mental shortcut or rule of thumb used to make decisions quickly with limited information |
| Ecological rationality | The idea that a decision strategy is rational if it fits the structure of the environment where it’s used, not just if it satisfies abstract mathematical rules |
| Framing effect | The phenomenon where the same choice presented differently (e.g., as a gain vs. a loss) leads people to make different decisions |
| Loss aversion | The tendency for losses to feel psychologically more intense than equivalent gains, making people more cautious about potential losses |
| Logical omniscience | The unrealistic assumption that a rational agent knows all logical consequences of everything they believe |
Key People
- Herbert Simon (1916–2001) — An economist and political scientist who first proposed the concept of bounded rationality and the strategy of satisficing, arguing that real humans can’t and shouldn’t try to be perfect calculators.
- Daniel Kahneman (1934–2024) and Amos Tversky (1937–1996) — Psychologists who showed through experiments that people systematically violate the predictions of expected utility theory, developing prospect theory and launching the “heuristics and biases” research program.
- Gerd Gigerenzer (born 1947) — A psychologist who leads the “fast and frugal heuristics” school, arguing that simple heuristics can be rational in the right environments and that the “biases” identified by Kahneman and Tversky are often adaptive.
- L.J. Savage (1917–1971) — A mathematician and statistician who helped develop expected utility theory, but who also worried about whether the theory could account for the costs of thinking and computing.
- Robert Axelrod (born 1943) — A political scientist who ran tournaments showing that a very simple strategy (tit-for-tat) could outperform complex strategies in repeated games.
Things to Think About
- If you had to design a decision-making system for a robot that had limited battery power and memory, would you give it the rules of expected utility theory, or something simpler? Why?
- Suppose a heuristic works really well 95% of the time but fails badly 5% of the time. Is it rational to use it? Does the answer depend on what’s at stake in the 5% of cases where it fails?
- People sometimes say “ignorance is bliss” — but could ignorance ever be rational? Are there situations where knowing less might lead to better decisions?
- If someone offered you free information before you make a decision, should you always take it? Is there a case where having more information could actually hurt your decision-making?
Where This Shows Up
- In school: When you decide which homework assignment to do first based on which seems “good enough” to start, rather than calculating the optimal order — that’s satisficing.
- In sports: When a baseball player catches a fly ball by running in a way that keeps the ball at a constant angle in their vision (the gaze heuristic), they’re using a boundedly rational strategy that works better than calculating the trajectory.
- In technology: Recommendation algorithms on Netflix or YouTube use bounded rationality — they don’t try to predict what you’d absolutely love; they find things that are “good enough” based on limited information.
- In law and policy: Courts sometimes use simple rules (like “innocent until proven guilty”) rather than trying to calculate exact probabilities, because the simple rule works well in practice and is easier to apply consistently.
- In your own mind: Next time you catch yourself making a snap judgment or using a rule of thumb, notice that you’re doing bounded rationality. The question isn’t whether you’re being irrational — it’s whether your shortcut is a smart one for the situation you’re in.