Philosophy for Kids

What Makes Something Right or Wrong? The Big Question Behind the Rules

Imagine you’re in the school cafeteria. Your friend has just dropped their lunch tray. Food is everywhere. You have two options: help them pick things up, or keep walking because you’re late to meet a study group that really needs you. What’s the right thing to do?

Most of us don’t think very hard about this kind of choice. We just know that helping a friend who’s in trouble is good, and that being late to a meeting isn’t great but isn’t terrible either. We have rules in our heads: “Help your friends,” “Keep your promises,” “Don’t be mean.” These rules feel natural, like they’ve always been there.

But where do these rules actually come from? And what makes a rule a good rule?

Philosophers called consequentialists have a radical answer: rules are justified only by their consequences. A rule is good if following it makes the world better for everyone. If a rule makes things worse, it’s a bad rule—no matter how ancient or respected it is.

But here’s where things get complicated. If you try to apply that idea directly to every single choice you make, weird things happen. Let’s see why.

The Trouble with “Just Do Whatever Works Best”

The simplest version of consequentialism says: for every action, look at the results. Pick the action that produces the most good. An action is wrong if there was some other action you could have taken that would have produced more good.

This sounds reasonable. But think about what it demands.

Suppose you have a math test tomorrow. You’ve been studying for weeks. But your friend calls and says they’re really struggling and need help tonight—and honestly, your help would raise their grade more than your own studying would raise yours. The “do whatever works best” approach says you should help them. You’d be doing wrong if you didn’t.

What about your new bike? A kid in your neighborhood doesn’t have one, and they’d get more joy from it than you do now that you’ve had it for a while. Should you give it away? The “do whatever works best” approach seems to say yes.

What about your snack, your time, your birthday money? There are always people who could benefit more from these things than you do. If every action must produce the most good possible, you’d be required to give and give and give until the point where giving more would hurt you more than it helps others. That’s a lot of giving.

These aren’t just thought experiments. Philosophers call this the “demandingness objection”—the worry that consequentialism asks too much of us. It doesn’t leave room for your own projects, your own relationships, your own life.

Most people find this deeply uncomfortable. It seems like something has gone wrong if morality demands that you give away everything you have to help strangers. But the logic of “just do whatever works best” pushes you in that direction.

And there’s another problem too. Imagine a world where everyone is constantly calculating: “Should I keep this promise? Let me check if breaking it would produce slightly more good.” Trust would evaporate. You could never count on anyone, because they might always decide that today is the day when lying to you works out better for everyone.

So the simple version of consequentialism has real problems. But that doesn’t mean the basic idea—that consequences matter—is wrong. Maybe the problem is where we apply the test.

The Big Idea: Rules, Not Acts

Here’s a different approach. Instead of asking “Does this action produce the best results?” ask “What rules would produce the best results if everyone followed them?”

This is rule-consequentialism. The idea is simple: you figure out which set of moral rules would make the world best, if most people accepted and followed them. Then you follow those rules. When a rule tells you not to steal, you don’t steal—even if in this one case, stealing might help. When a rule tells you to keep promises, you keep them—even if breaking this one promise would be convenient.

Why follow rules even when breaking them would seem to help more? Because the whole point of having rules is that you can’t just bend them whenever you feel like it. The benefits come from everyone knowing roughly what to expect. Society runs better when people can trust that others won’t steal their stuff, lie to them, or break promises at will.

This is how you avoid the “demandingness” problem too. A set of rules that demanded you give away everything you own to help strangers would be extremely hard to teach people to follow. Children start out caring mostly about themselves and a few others. Getting them to care equally about everyone in the world would require enormous effort—and probably wouldn’t even work. The costs of teaching such demanding rules would be so high that they’d outweigh the benefits.

So rule-consequentialism tends to favor rules that are:

  • Not too numerous
  • Not too complicated
  • Not too demanding

Rules like “Don’t harm innocent people,” “Keep your promises,” “Tell the truth,” “Help your family and friends,” “Be generally helpful to others.” These are rules that can actually be taught, learned, and followed. And a society where most people follow these rules is a society with a lot more good in it.

But Wait—Doesn’t This Collapse Back Into Simple Consequentialism?

There’s a famous objection to rule-consequentialism. It goes like this:

Imagine you have the rule “Don’t steal.” Now suppose you’re in a situation where stealing would produce really good results—maybe you need to steal bread to feed someone who’s starving. A simple consequentialist would say: steal the bread. But the rule-consequentialist says: follow the rule.

But wait—couldn’t we just change the rule to “Don’t steal, except when someone is starving”? Then the rule-consequentialist could steal the bread and follow the rule. And if that works, why stop there? Why not keep adding exceptions: “Don’t steal, except when someone is starving, or when the person you’re stealing from won’t miss it, or when…”

If you add enough exceptions, the rule eventually tells you to do exactly what the simple “do whatever works best” approach tells you. The rule-consequentialist ends up requiring the same actions as the simple consequentialist. This is called the “collapse objection”—the worry that rule-consequentialism just collapses into the simpler view it was supposed to replace.

Rule-consequentialists have a good reply to this. Rules with too many exceptions are hard to learn, hard to teach, and easy to abuse. If you’re allowed to steal whenever you can convince yourself that an exception applies, people lose confidence that their property is safe. The whole point of having rules is to create stability and trust. A rule with a thousand tiny exceptions doesn’t do that.

So rule-consequentialists accept that sometimes their rules will lead to actions that don’t maximize good in that particular case. That’s the price of having rules that actually work in the real world. And they think the price is worth paying.

The Hard Question: What Counts as “Good”?

So far we’ve talked about “producing good consequences” as if everyone agrees on what “good” means. They don’t.

The earliest consequentialists—philosophers like Jeremy Bentham and John Stuart Mill—thought good was simply pleasure. An action was good if it increased pleasure and reduced pain. This view is called hedonism.

But is pleasure really all that matters? Think about someone who wants to know the truth about something, even if the truth is painful. Or someone who wants to achieve something difficult, even if the process isn’t pleasant. Or someone who cares about their friends purely for the friends’ sake, not because the friendship gives them pleasure.

Most people care about things beyond their own pleasure: knowledge, achievement, friendship, fairness. These things matter to us even when they don’t make us feel good. So hedonism seems too narrow.

Another theory says that what’s good is getting what you want—the desire-fulfillment theory. If you desire something and get it, that’s good for you, even if you never know about it and it gives you no pleasure.

But this seems too broad. You might desire that starving children in another country get food. That’s a good thing to happen, but does the fulfillment of your desire benefit you? It seems strange to say that your welfare is improved because children you’ll never meet are fed. And what about silly desires? If you want to count every blade of grass on your street, is the fulfillment of that desire actually good for you? Probably not.

This leads to a third view: some things are just good for people regardless of what they desire—things like pleasure, knowledge, achievement, friendship, and autonomy. This is called the objective list theory. It says there’s a list of things that make any human life go well, and promoting these things is what morality is about.

Different versions of consequentialism include different things on this list. Some include fairness and equality. Others don’t, arguing that equality is already taken care of by the way consequences work (since giving resources to people who have less usually produces more benefit per unit).

The Equality Puzzle

This brings up a deep puzzle. Imagine two possible worlds:

World A: 10,000 people are very badly off (1 unit of welfare each) and 100,000 people are doing pretty well (10 units each).

World B: Everyone is doing okay (8 units each for the worst-off group, 9 units each for everyone else), but the total amount of welfare is lower.

             Worst-off group           Better-off group            Total welfare
  World A    1 each (10,000 people)    10 each (100,000 people)    1,010,000
  World B    8 each (10,000 people)    9 each (100,000 people)     980,000

Many people think World B is better, even though it has less total welfare. It’s more equal. Nobody’s suffering.
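The totals in the comparison can be checked with a few lines of arithmetic. A minimal sketch (the group sizes and welfare levels come straight from the two worlds described above):

```python
# Total welfare is just (people in a group) x (welfare per person),
# summed over all the groups in that world.
def total_welfare(groups):
    """groups: list of (number_of_people, welfare_per_person) pairs."""
    return sum(people * welfare for people, welfare in groups)

world_a = [(10_000, 1), (100_000, 10)]  # very unequal
world_b = [(10_000, 8), (100_000, 9)]   # nearly equal

print(total_welfare(world_a))  # 1010000
print(total_welfare(world_b))  # 980000
```

So World A really does contain more welfare in total, even though World B is the one most people prefer.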

But now consider:

World C: Everyone is equally badly off (1 each). Would that be better than World A, where some people are doing well?

             Worst-off group           Better-off group            Total welfare
  World A    1 each (10,000 people)    10 each (100,000 people)    1,010,000
  World C    1 each (10,000 people)    1 each (100,000 people)     110,000

Here’s the disturbing question from philosopher Derek Parfit: Is it good to make everyone equally blind if the only way to achieve equality is to blind the sighted? Most people say no. Making everyone worse off just to achieve equality seems wrong.

This suggests that what we really care about isn’t equality for its own sake. What we care about is helping the worst-off people. We want to raise people up, not drag others down. This view is called prioritarianism: benefits matter more the worse off the person is who receives them.

Prioritarianism feels intuitively right to many people. But it raises hard questions. How much more should a benefit to the worst-off person count? Ten times more? Five times? Nobody knows how to set this number without being arbitrary. And there’s a worry that giving extra weight to some people’s welfare violates the basic impartiality that made consequentialism attractive in the first place.
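One way to make "benefits matter more the worse off the person is" precise is to run each person's welfare through a curve that flattens out as welfare rises, then add up the curved scores instead of the raw ones. The square root used below is purely an illustrative choice, not a weighting anyone has defended—and that arbitrariness is exactly the problem the paragraph above points to:

```python
import math

# Prioritarian score: apply a "priority weighting" (here, a square root)
# to each person's welfare before summing. Moving someone from 1 to 2
# adds more to the score than moving someone from 9 to 10.
def prioritarian_value(groups):
    """groups: list of (number_of_people, welfare_per_person) pairs."""
    return sum(people * math.sqrt(welfare) for people, welfare in groups)

world_a = [(10_000, 1), (100_000, 10)]  # more total welfare, very unequal
world_b = [(10_000, 8), (100_000, 9)]   # less total welfare, nearly equal

# Plain totals rank A above B; the priority-weighted score ranks B above A.
print(round(prioritarian_value(world_a)))  # 326228
print(round(prioritarian_value(world_b)))  # 328284
```

Notice that this weighting also handles the levelling-down worry: World C, where everyone is at 1, scores far below World A on any such curve, because the score only ever goes up when someone's welfare goes up.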

So What’s the Right Answer?

Philosophers are still arguing about all of this. Rule-consequentialism offers a way to keep the basic insight of consequentialism—that consequences matter—while avoiding the most troublesome implications. It says: don’t calculate every action. Instead, figure out which rules would make the world best if everyone followed them, and then follow those rules.

This approach can explain why we have rules against lying, stealing, and harming others, and rules requiring us to help friends and family. It can explain why these rules don’t have a million tiny exceptions. And it can explain why morality doesn’t demand that we give away everything we own.

But it’s still a work in progress. Critics point out that rule-consequentialism seems to make the justification of moral rules depend on messy empirical facts about human nature and society. If human nature were different, the rules would be different. Does that mean there are no absolute moral truths?

And there’s the deeper question: Why should we care about promoting good consequences at all? If you’re not already convinced that making the world better is the point of morality, rule-consequentialism might not give you a reason to care.

These are live debates. Nobody has settled them. But the questions are worth thinking about, because they’re really questions about what kind of world we want to live in, and what kind of people we want to be.


Key Terms

  • Consequentialism: the view that the rightness or wrongness of actions depends entirely on their consequences.
  • Rule-consequentialism: the view that we should follow the rules whose general acceptance would produce the best consequences, rather than calculating each action separately.
  • Act-consequentialism: the view that each action should be judged directly by whether it produces the best consequences.
  • Hedonism: the claim that only pleasure and pain matter for well-being.
  • Desire-fulfillment theory: the claim that getting what you want is good for you, even if you don’t know about it.
  • Objective list theory: the claim that certain things (knowledge, friendship, achievement, etc.) are good for people regardless of what they desire.
  • Prioritarianism: the view that benefits to worse-off people matter more than the same-sized benefits to better-off people.
  • Demandingness objection: the complaint that some moral theories require us to sacrifice too much for others.

Key People

  • Jeremy Bentham (1748–1832): An English philosopher who argued that right and wrong should be decided by “the greatest happiness principle”—maximizing pleasure and minimizing pain for everyone affected.
  • John Stuart Mill (1806–1873): A British philosopher who developed Bentham’s ideas and argued that some pleasures (intellectual ones) are higher and better than others.
  • Derek Parfit (1942–2017): A contemporary philosopher who raised the “levelling down” objection to equality and helped develop prioritarianism.
  • Richard Brandt (1910–1997): An American philosopher who developed a detailed version of rule-consequentialism that considered the costs of teaching and internalizing moral rules.

Things to Think About

  1. If you had to choose between a world where everyone follows strict rules (no lying, no stealing, keep promises) but sometimes misses opportunities to help, and a world where everyone calculates each action separately and sometimes breaks rules to help more—which world would you rather live in? Why?

  2. The “demandingness objection” says some moral theories ask too much of us. But maybe the problem isn’t the theory—maybe we’re just selfish and don’t want to admit how much we should give. How can we tell the difference between a theory that’s too demanding and a theory that’s just challenging?

  3. Is there something wrong with “levelling down”—making everyone worse off to achieve equality? Or is there something genuinely valuable about equality even when it makes no one better off? What does your answer suggest about what you really value?

  4. If moral rules are justified by their consequences, and human nature or circumstances change, the rules might need to change too. Does that mean there are no moral truths that apply to all people at all times? Or can a rule be “true for everyone” even if it wouldn’t work in every possible world?

Where This Shows Up

  • In arguments about fairness: When someone says “rules are rules” even when breaking one would help someone, they’re taking a rule-consequentialist kind of position. When someone says “but this time it’s different,” they’re arguing like an act-consequentialist.
  • In school discipline: Schools have rules, and teachers sometimes have to decide whether to enforce them strictly or make exceptions. The debate about whether to have “zero tolerance” policies or case-by-case judgment mirrors the philosophical debate.
  • In political debates: When people argue about whether we should have strict laws (even if they sometimes cause harm) or flexible laws that let officials make exceptions, they’re engaging with the same tension between rule-following and case-by-case judgment.
  • In everyday life: When you decide whether to tell a “white lie” to spare someone’s feelings, you’re facing the same question: follow the rule “don’t lie,” or calculate whether this particular lie produces better results?