What If Every Moral Theory Could Be Turned Into Consequentialism?
Imagine you’re trying to decide whether to break a promise. You promised your friend you’d help them study for tomorrow’s big test, but now another friend calls in a panic—they’re about to fail unless you help them right now. You can’t do both. What should you do?
Most people would say: you should keep your promise. It’s wrong to break a promise, even if breaking it would help someone else. That seems pretty straightforward.
But then a philosopher comes along and says: “Actually, breaking the promise is wrong because the outcome in which you break it is worse than the outcome in which you keep it. The universe just goes better when promises are kept.”
Wait a second, you think. Isn’t that backwards? Isn’t breaking a promise wrong because you made a promise—not because of some abstract fact about which outcome is “better”? Aren’t these two different ways of thinking about morality?
This disagreement points to something philosophers have argued about for a long time. And recently, some of them have tried a surprising move: they’ve tried to show that any moral theory can be turned into a version of consequentialism—the view that says the right thing to do is always whatever brings about the best outcome. They call this “consequentializing,” and it’s led to a heated debate about whether the whole argument between consequentialists and their opponents is even real.
The Basic Idea
Here’s the core puzzle. Consequentialism (in its simplest form) says: an act is right if and only if its outcome is at least as good as the outcome of any alternative act you could have done instead. In other words, you should always do whatever makes the world go best.
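That criterion can be stated as a tiny decision rule. Here is a minimal sketch, with purely illustrative act names and happiness values (nothing in this block comes from a real moral theory):

```python
# A minimal sketch of maximizing consequentialism.
# An act is permissible iff its outcome is at least as good as
# the outcome of every alternative.

def permissible(act, alternatives, value):
    """Return True iff `act`'s outcome value matches or beats all alternatives."""
    return all(value[act] >= value[alt] for alt in alternatives)

# Toy case: rank outcomes purely by happiness produced (hypothetical numbers).
value = {"keep_promise": 10, "break_promise": 11}
acts = list(value)

print(permissible("break_promise", acts, value))  # True: it's the best outcome
print(permissible("keep_promise", acts, value))   # False
```

On this simple valuation, breaking the promise wins as soon as it produces even slightly more happiness, which is exactly the counterintuitive result discussed next.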
The problem is that this seems to give the wrong answers in lots of cases. Remember the promise-breaking example: if breaking your promise helps someone else just as much as keeping it would (or even a little more), consequentialism seems to say you should break it. But most people think you shouldn’t.
So philosophers have tried to fix consequentialism. One way is to change what counts as a “good outcome.” Maybe outcomes aren’t just about happiness or pleasure. Maybe outcomes that involve promise-breaking are worse than outcomes that don’t, even if the happiness levels are the same. So the consequentialist can say: breaking a promise is wrong because it brings about a worse outcome—one that includes the badness of a broken promise.
This sounds like it might work. But then another problem appears. Suppose you can break your promise to prevent two other people from breaking their promises. If promise-breaking is bad, then two broken promises are worse than one. So bringing about an outcome with two broken promises is worse than bringing about an outcome with one broken promise. That means you should break your promise to prevent the two others from breaking theirs. But most people think that’s still wrong: you shouldn’t break your promise even to prevent more promise-breaking.
How can the consequentialist handle this? Their move is to say that outcomes are ranked differently for different people. It’s not that there’s one single ranking of how good outcomes are for everyone. Instead, each person has their own ranking. And you, the person who made the promise, have special reasons to care about your own promises. So for you, the outcome where you keep your promise is better than the outcome where you break it—even if breaking it would prevent two other people from breaking theirs. For a bystander, the rankings might be different.
This allows the consequentialist to say: you should keep your promise because, from your perspective, keeping it brings about a better outcome than breaking it would.
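The agent-relative move can be sketched as giving each agent their own ranking over the same outcomes. The numbers and names below are illustrative assumptions, chosen only to reproduce the verdicts described above:

```python
# Sketch of agent-relative rankings: the same outcomes, ranked differently
# depending on whose perspective is used. Values are illustrative.

rankings = {
    # From the promiser's perspective, keeping their own promise outweighs
    # preventing two broken promises by others.
    "promiser":  {"keep_own": 3, "break_own_to_prevent_two": 1},
    # A bystander might instead rank outcomes by total promises kept.
    "bystander": {"keep_own": 1, "break_own_to_prevent_two": 2},
}

def best_act(agent, acts):
    """The act whose outcome ranks highest from this agent's perspective."""
    return max(acts, key=lambda act: rankings[agent][act])

acts = ["keep_own", "break_own_to_prevent_two"]
print(best_act("promiser", acts))   # keep_own
print(best_act("bystander", acts))  # break_own_to_prevent_two
```

The design point is that "best outcome" is now indexed to an agent, so the promiser and the bystander can both be maximizing without agreeing on what to do.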
Why Would Anyone Want to Do This?
There are three different reasons philosophers have given for wanting to consequentialize a moral theory, and they lead to three different projects.
Earnest consequentializing. Some philosophers really do think consequentialism has something uniquely compelling about it. They think there’s a deep insight in the idea that you should always act to bring about the best outcome—and they want to save that insight while avoiding the crazy conclusions that simple versions of consequentialism lead to. They’re trying to build a better version of consequentialism, one that matches our commonsense moral intuitions.
Notational consequentializing. Other philosophers have a different goal. They want to show that the whole debate between consequentialists and non-consequentialists is empty—that every plausible moral theory can be rewritten in consequentialist terms (and also rewritten in non-consequentialist terms). If that’s true, then the two sides aren’t really disagreeing about anything substantive. They’re just using different words to say the same things, like saying “100 degrees Celsius” versus “212 degrees Fahrenheit” for the temperature of boiling water.
Pragmatic consequentializing. Still other philosophers don’t care about the philosophical debate at all. They just want to use the tools of decision theory—a branch of math that helps you make rational choices when you don’t have perfect information—to figure out what their preferred moral theory says in tricky situations. Consequentialist theories are easier to plug into decision theory, so they consequentialize their theory as a practical tool.
Is This Just a Cheap Trick?
A lot of philosophers think consequentializing is a gimmick. Here’s why.
Remember how the consequentializer handles promise-breaking? They say: “Promise-breaking is bad, so outcomes with broken promises are worse than outcomes without them.” But critics say this is just taking whatever the non-consequentialist thinks is wrong and calling it “bad for outcomes” instead. It’s like having a debate where one person says “X is wrong” and the other says “No, X is wrong because it produces a bad outcome, and I just define ‘bad outcome’ as ‘outcome where X happens.’” There’s no real substance there—you’re just playing with words.
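The critics' worry can be made concrete. The verdict-matching move (the "Footian Procedure" from the key terms below) just reads the outcome ranking off the non-consequentialist's verdicts, as in this sketch with illustrative names:

```python
# Sketch of the critics' complaint: build the outcome ranking directly
# from the non-consequentialist's deontic verdicts, by stipulation.

def footian_value(verdicts):
    """Assign permitted acts a better outcome than forbidden ones."""
    return {act: (1 if ok else 0) for act, ok in verdicts.items()}

# A non-consequentialist's verdicts:
verdicts = {"keep_promise": True, "break_promise": False}
value = footian_value(verdicts)

# The "consequentialized" theory: an act is right iff its outcome is maximal.
recovered = {act: value[act] == max(value.values()) for act in value}
print(recovered == verdicts)  # True: same answers, just relabeled
```

Nothing in the construction explains *why* the outcomes rank that way; the ranking is whatever it takes to echo the original verdicts, which is the critics' point.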
The notational consequentializer doesn’t mind this accusation. They’re happy to admit that the equivalent theories are just different ways of saying the same thing. That’s their whole point!
But the earnest consequentializer wants more. They want to show that their version of consequentialism actually explains why certain acts are wrong, not just relabels them. And they think they can do this by pointing to real reasons for preferring some outcomes over others. For instance, when they say you should prefer the outcome where you keep your promise over the outcome where you break it, they can give a real reason: you have special responsibility for your own promises in a way you don’t for other people’s promises. This isn’t just a random move to get the right answer—it’s grounded in something that makes sense.
Who’s Right?
The debate is still very much alive. Here are some of the main points of disagreement.
Some philosophers say that even if you can consequentialize a theory, the resulting theory gets the explanation backwards. They think promise-breaking is wrong because it’s a broken promise, not because the outcome is worse. The badness of the outcome comes from the wrongness of the act, not the other way around. The consequentializer disagrees and thinks they can give a better explanation.
Other philosophers argue that some features of commonsense morality simply can’t be consequentialized. For instance, what about a situation where every possible action you could take is wrong? Some moral theories allow for this kind of “prohibition dilemma.” But can a consequentialist theory ever say that every available act is impermissible? Only if its ranking leaves every available outcome outranked by some other available outcome. With finitely many options, an ordinary transitive ranking always has a top option, so this would require a circular or intransitive ranking. Whether such a ranking is even coherent is controversial.
And even if you can consequentialize everything, does that mean the distinction between consequentialism and non-consequentialism is empty? Some philosophers argue that it doesn’t, because the two kinds of theories disagree about what “active ingredients” make an act right or wrong—not just about how to describe those ingredients. The consequentialist says the active ingredient is always about the outcome; the non-consequentialist says it’s about something else (like whether the act shows proper respect for others). Simply relabeling doesn’t make this disagreement go away.
Where This Leaves Us
Arguing against consequentialism turns out to be trickier than you might think. You can’t just point to an example where consequentialism seems to give the wrong answer—like “it’s wrong to kill one person to save five”—because the consequentialist can always reply: “You’re assuming a particular ranking of outcomes that I don’t have to accept. Maybe from your perspective, killing the one is worse than letting the five die. That’s still consequentialism.”
So the real debate isn’t about whether consequentialism gives the right answers in specific cases. It’s about deeper questions: What makes an act right or wrong? Do outcomes matter more than anything else, or do other features of actions matter in their own right? And when two theories give the same answers about what’s right and wrong but give different explanations for why those answers are correct, are they really different theories or just the same theory in different words?
These questions don’t have settled answers. Philosophers keep arguing about them, and the arguments have gotten quite sophisticated. But the basic puzzle—whether every moral theory can be turned into consequentialism, and what that would mean—is one that cuts to the heart of how we think about right and wrong.
Key Terms
| Term | What it does in this debate |
|---|---|
| Consequentialism | The view that an act is right or wrong depending entirely on whether its outcome is good enough compared to alternatives |
| Consequentializing | The project of taking a non-consequentialist theory and rewriting it so it looks like a version of consequentialism |
| Agent-relative ranking | The idea that outcomes can be ranked differently depending on whose perspective you’re using—what’s best for you might not be best for someone else |
| Footian Procedure | A method for generating a consequentialist version of any non-consequentialist theory by just matching their verdicts case by case |
| Coherentist Procedure | A method for building a consequentialist theory by revising both our judgments about outcomes and our judgments about right and wrong until they fit together consistently |
| Deontic verdict | A judgment about whether an act is right, wrong, or permissible |
| Extensional Equivalence Thesis | The claim that every plausible non-consequentialist theory has a consequentialist counterpart that gives exactly the same answers in every possible situation |
Key People
- Philippa Foot – A philosopher who argued that there’s something uniquely compelling about consequentialism (the thought that you should never prefer a worse state of affairs to a better one), but also that this thought might be wrong.
- James Dreier – A philosopher who argued that the distinction between consequentialism and non-consequentialism might be empty, because every theory can be rewritten in either form.
- John Stuart Mill – A 19th-century philosopher who tried to fix utilitarianism (a type of consequentialism) by saying that some pleasures are higher quality than others, not just more intense.
- W. D. Ross – A philosopher who argued that the badness of an outcome comes from the wrongness of the act that produces it, not the other way around.
- Campbell Brown – A philosopher who argued that some features of non-consequentialist theories (like prohibition dilemmas) simply cannot be consequentialized.
Things to Think About
- If you could take any moral theory and rewrite it so it looks like consequentialism, does that mean the original theory was “really” consequentialism all along? Or does how you explain right and wrong matter more than what verdicts you reach?
- Imagine someone who says: “I don’t care about the overall goodness of outcomes. I only care about keeping my promises.” If a consequentialist replies: “But keeping your promise is what brings about the best outcome from your perspective,” has the first person been misrepresented? Or is the consequentialist just redescribing the same thing?
- Suppose two friends both believe stealing is wrong, but for different reasons. One thinks it’s wrong because it produces bad consequences; the other thinks it’s wrong regardless of consequences. If they agree on every single case about whether stealing is wrong, are they really disagreeing about anything important?
- If you could use a tool (like decision theory) to figure out what your moral theory says in complicated situations, would it matter if the tool required you to rewrite your theory in a different form? Or would the results be just as valid?
Where This Shows Up
- Political debates – When people argue about whether a policy is right because of its consequences (like economic growth) or because of principles (like rights), they’re replaying versions of this debate.
- Everyday moral decisions – When you think about whether to tell a white lie to avoid hurting someone’s feelings, you’re weighing consequences against principles in exactly the way this debate is about.
- Artificial intelligence ethics – Engineers who design AI systems need to decide whether to use a “consequentialist” framework (maximize good outcomes) or a “deontological” one (follow rules). The question of whether these are really different approaches matters for how they build the systems.
- Legal reasoning – Courts sometimes reason about consequences (what will happen if we allow this) and sometimes about principles (people have a right to this regardless). Philosophers who think these are just different ways of talking about the same thing would say the conflict isn’t real.