Choosing What Matters: An Introduction to Consequentialism
You and a friend are splitting the last cookie. You both want it. One obvious way to decide is to flip a coin. Another is to let your friend have it because she had a bad day. Another is to grab it for yourself because you got there first. But here’s a stranger question: what if the right thing to do is simply whatever makes the world best overall? Not best for you, not best for your friend, but best when you add up everyone’s happiness and suffering?
This is the basic idea behind a family of moral theories called consequentialism. The name gives away the core claim: what makes an act right or wrong depends entirely on its consequences — on what actually happens as a result.
That might sound obvious at first. Of course consequences matter. But consequentialists go much further. They say consequences are the only thing that matters for judging whether an act is right or wrong. Not whether you made a promise. Not whether you meant well. Not whether you followed the rules. Just: what happened because of what you did?
This leads to some strange and uncomfortable conclusions. Philosophers have been arguing about them for over two hundred years.
The Original Version: Classic Utilitarianism
The first fully worked-out version of consequentialism was called utilitarianism, developed by three thinkers in the 18th and 19th centuries: Jeremy Bentham, John Stuart Mill, and Henry Sidgwick. Their view had several parts, but here’s the heart of it:
- The only thing that’s good in itself is pleasure (and the only bad thing is pain). This is called hedonism — from the Greek word for pleasure.
- An act is morally right if and only if it produces the greatest total amount of pleasure minus pain, compared to any other act you could do instead.
- Everyone’s pleasure counts equally. Your pleasure matters no more and no less than a stranger’s.
This is often summed up in the slogan “the greatest happiness for the greatest number.” But that slogan is actually misleading. Imagine ten people each gain one unit of happiness from a decision, while the one person who loses out suffers ten units of unhappiness. The total (10 units of happiness minus 10 units of unhappiness = 0) is no better than doing nothing — and if the loser suffered even slightly more, the total would be negative, making the decision worse than doing nothing. So utilitarianism doesn’t just care about making as many people as possible happy; it cares about the total amount of happiness in the world, balancing gains against losses.
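The aggregation step in this example can be made concrete with a small sketch. This is purely illustrative: the utility numbers are invented, and real well-being is nothing like this easy to quantify — which is itself a standing objection to the theory.

```python
# A minimal sketch of the utilitarian calculus described above.
# The "units of happiness" are invented numbers for illustration only.

def net_utility(changes):
    """Sum everyone's gains and losses in well-being,
    counting each person's units equally (impartiality)."""
    return sum(changes)

# Ten people each gain one unit; the one loser suffers ten units.
decision = [1] * 10 + [-10]
do_nothing = [0] * 11

print(net_utility(decision))    # 0 — no better than doing nothing
print(net_utility(do_nothing))  # 0
```

Classic utilitarianism would rank the two acts as equally good here, even though one of them makes the "greatest number" happy — which is why the slogan misleads.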
The Appeal: Why This Makes Sense
There’s something deeply attractive about this view. It’s simple. It’s impartial — it doesn’t play favorites. It treats every person’s well-being as equally important. And it seems to capture something we already believe: that making the world better is a good thing, and making it worse is a bad thing. Most of us start with the intuition that we should make the world better when we can. The question is whether anything else matters too.
The Problems: Three Big Challenges
Opponents of consequentialism have raised many objections. Three are especially important.
1. The Experience Machine
Imagine a machine that could give you any experience you wanted. You could believe you’re spending time with friends, winning Olympic gold, falling in love, accomplishing great things. You’d feel all the pleasure those experiences bring. But none of it would be real. You’d just be floating in a tank with electrodes attached to your brain.
Would you plug in?
If pleasure and pain were the only things that mattered, you absolutely should. You’d get all the pleasure with none of the pain. But most people say they wouldn’t plug in. They want real friendships, real achievements, real knowledge — not just the feelings of those things. This suggests that there’s more to a good life than just pleasure. Philosopher Robert Nozick invented this thought experiment to show that hedonism can’t be the whole story about what’s valuable.
Consequentialists have responses. Some say the machine can’t actually give you propositional pleasure — pleasure taken in something’s really being the case (like being pleased that your friend actually did get better grades) — because nothing inside the machine is real. Others say what matters is satisfying your preferences, and most people prefer real life to the machine. But the example has stuck because it captures something many people feel: that there’s value in actually doing and being things, not just in feeling good.
2. The Transplant Case
This is the most famous objection to consequentialism. Here’s the setup:
Five patients in a hospital will die without organ transplants. One needs a heart, another a liver, another a kidney, and so on. In Room 6 is a perfectly healthy person who came in for routine tests. His tissue happens to be compatible with all five patients. A surgeon could cut him up and transplant his organs, saving five lives while killing one.
If the surgeon does this, five people live and one dies. If she doesn’t, five die and one lives. Five lives saved versus one life lost seems to maximize happiness overall. So classic utilitarianism seems to say the surgeon should perform the transplant — and that it would be wrong not to.
Most people find this appalling. The healthy person has a right to life. You can’t just kill one person to save five, even if the numbers add up. This objection suggests that consequentialism ignores something crucial: individual rights. There are things you can’t do to people, even for a good cause.
Consequentialists have tried many responses. Some “bite the bullet” and say the transplant really is morally right — our intuitions just evolved to handle normal situations, not bizarre hypotheticals. Others modify the theory. Some say that killing is worse than letting someone die, so the world with the transplant might actually be worse. Others adopt agent-relative consequentialism, which says that from the doctor’s perspective, her own act of killing has special disvalue, even if an outside observer might think the overall outcome is better. Still others turn to rule consequentialism, which judges acts by whether they follow rules that would have the best consequences if everyone accepted them. A rule allowing doctors to harvest organs from unwilling patients, they argue, would destroy trust in medicine and lead to terrible consequences overall.
This part gets complicated, and philosophers still argue about which responses work. But the fact that consequentialists need such elaborate responses shows how much trouble the Transplant case causes.
3. The Demandingness Problem
Here’s a much more everyday problem. You have $100. You could buy a new pair of shoes, or you could give that $100 to a charity that will use it to save someone’s life. Giving the money would produce much more happiness than buying the shoes. So consequentialism seems to say you must give the money — and that buying the shoes is morally wrong.
But most people don’t think it’s wrong to buy shoes. It might be nice to give to charity, but it’s not required. We usually think there are some things that are good to do but not obligatory — these are called supererogatory acts, meaning “above and beyond the call of duty.”
The problem is that consequentialism seems to demand way too much. Almost everything you do could be replaced with something that helps more people. Watching TV? You could be volunteering at a shelter. Eating a nice dinner? That money could feed a family. Studying philosophy instead of working for an aid organization? You could probably do more good elsewhere. If consequentialism is right, then most of what most people do is morally wrong, because we’re almost never maximizing happiness.
Some consequentialists accept this conclusion and say we really are required to change our lives dramatically. Others try to soften the demand. Satisficing consequentialism says you only need to do enough good, not the most good. Scalar consequentialism says we should judge acts as better or worse without saying any of them is absolutely wrong. Rule consequentialism says that the rules we can realistically internalize don’t require constant self-sacrifice.
But critics say these responses miss the point. The deeper issue is that morality shouldn’t demand that we treat everyone’s interests exactly equally. We’re allowed to care more about ourselves, our families, and our friends. When you save your drowning wife instead of a drowning stranger, you shouldn’t have to calculate whether that maximizes happiness. As philosopher Bernard Williams put it, that would be “one thought too many.”
What Counts as Good? The Problem of Value
All versions of consequentialism need an answer to the question: what makes consequences good? Hedonists say only pleasure and pain. But as the Experience Machine shows, many people think other things matter too: knowledge, freedom, friendship, achievement, justice.
Pluralist consequentialists say multiple things are good. G. E. Moore included beauty and knowledge alongside pleasure. Others add freedom, love, fairness, or moral rights. This makes consequentialism more plausible in some ways — it can explain why lying is wrong even when no one gets hurt (false beliefs are bad in themselves), and why keeping promises matters (broken trust is bad).
But pluralism creates new problems. How do you compare different values? Is more freedom worth less happiness? Can you trade justice for pleasure? Philosophers disagree about whether these comparisons are even possible in some cases.
Some consequentialists give up on adding up values at all. Holistic consequentialism compares whole worlds — the world that results from your act versus the world that results from not doing it — without trying to calculate the values of individual parts. This allows for complexity, but it also makes the theory harder to apply.
Actual vs. Expected Consequences
Another debate: should we judge acts by their actual consequences or their expected consequences?
Suppose Alice finds a runaway teenager who says she needs money to get home. Alice buys her a bus ticket. But the bus has a freak accident, and the teenager dies. If actual consequences determine rightness, then Alice did something wrong, because her act led to death rather than safety. Many people find this absurd. Alice did the right thing; she just got unlucky.
Actual consequentialists bite the bullet: Alice’s act was wrong, but she’s blameless because she couldn’t have known. Expected consequentialists say rightness depends on what Alice could reasonably foresee, not on what actually happened. This view seems more intuitive, but it raises tricky questions about how to define “reasonable.”
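The expected-consequences view can be sketched as a probability-weighted sum over possible outcomes. The numbers below are invented for illustration — nothing in the philosophical literature assigns real values like these — but they show why Alice’s act looks right in prospect even though it turned out badly.

```python
# A minimal sketch of the actual- vs expected-consequences distinction.
# Outcome values and probabilities are invented for illustration only.

def expected_value(outcomes):
    """Probability-weighted sum over an act's possible outcomes,
    each given as a (probability, value) pair."""
    return sum(p * v for p, v in outcomes)

# Buying the bus ticket: almost certainly the teenager gets home
# safely (a good outcome), with a tiny chance of a fatal accident
# (a very bad one).
ticket = [(0.999, 10), (0.001, -1000)]

print(expected_value(ticket))  # positive, so the act looks right in prospect
```

On the expected view, Alice’s act is evaluated by this prospective sum, which is positive; on the actual view, only the outcome that in fact occurred (the accident) counts against her.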
Still a Live Debate
Consequentialism has been debated for over two centuries, and philosophers still haven’t settled it. The theory has a powerful appeal: it’s simple, impartial, and grounded in something we all care about — making the world better. But it faces serious challenges: it seems to conflict with our deepest intuitions about rights, fairness, and the special relationships we have with family and friends.
Most philosophers today don’t accept classic utilitarianism in its pure form. But many defend modified versions of consequentialism that try to address these objections. And even those who reject consequentialism often find that they can’t fully escape its pull. The question — what really matters when we decide what’s right? — remains open.
Key Terms
| Term | What it does in this debate |
|---|---|
| Consequentialism | The view that what makes an act right or wrong is only its consequences |
| Utilitarianism | The classic version of consequentialism, which says we should maximize happiness and minimize suffering |
| Hedonism | The claim that pleasure is the only good and pain the only bad |
| Supererogatory | An act that is good to do but not morally required |
| Agent-relative consequentialism | A version that allows consequences to be judged differently depending on whose perspective you take |
| Rule consequentialism | A version that judges acts by whether they follow rules that would have the best consequences if widely accepted |
| Actual vs. expected consequences | Whether rightness depends on what actually happens or on what was reasonable to expect |
Key People
- Jeremy Bentham (1748–1832) — An eccentric English philosopher who created the first systematic version of utilitarianism. He held that pleasures differ only in quantity, not quality, and could in principle be measured and compared.
- John Stuart Mill (1806–1873) — A British philosopher who was raised as a utilitarian but later tried to improve the theory. He argued that some pleasures (like poetry) are higher quality than others (like games).
- Bernard Williams (1929–2003) — A sharp-tongued British critic of consequentialism who argued it destroys personal integrity and the special bonds between people.
- Robert Nozick (1938–2002) — An American philosopher who invented the Experience Machine thought experiment to challenge hedonism.
Things to Think About
- Would you plug into the Experience Machine? What does your answer tell you about what you actually value?
- If you were one of the five patients needing organs, would you want the transplant to happen? If you were the healthy person? Is it possible that what’s right depends on who you are in the case?
- How much should you be required to give up for strangers? Is it possible to draw a non-arbitrary line between what’s required and what’s just nice to do?
- Suppose a form of consequentialism could perfectly match your moral intuitions — should that make you trust it, or suspect that it was designed just to match what you already think?
Where This Shows Up
- Everyday arguments — When people say “it’s for the greater good” or “think of the children,” they’re using consequentialist reasoning.
- Medical ethics — Doctors and hospitals constantly weigh benefits against harms when making treatment decisions.
- Public policy — Governments use cost-benefit analysis (a form of consequentialist thinking) to decide which programs to fund.
- Charity debates — Organizations like GiveWell and the effective altruism movement explicitly use consequentialist reasoning to figure out where donations do the most good.