Philosophy for Kids

Should We Use People to Help Others? The Ethics of Medical Research

Imagine you’re on a ship in 1747. You’ve been at sea for months, and your gums are bleeding, your joints ache, and you feel weak. This is scurvy, and it’s killing sailors by the thousands. The ship’s surgeon, James Lind, has an idea. He picks twelve sick sailors and splits them into six pairs. Each pair gets a different “cure”: cider, vinegar, seawater, dilute acid, a medicinal paste, or—for two lucky sailors—oranges and lemons. Within a week, the citrus eaters are fine. The others are still dying.

Here’s the strange thing: Lind didn’t give everyone the treatment he suspected would work. He gave some sailors seawater, even though he thought it was worthless. He did this on purpose, because he wanted to prove which treatments worked by comparing them head-to-head.

This is the central puzzle of medical research on humans: Is it okay to use some people to gather information that might help others?

It sounds simple, but it gets weird fast. Most of us think doctors should do what’s best for the patient sitting in front of them. Lind didn’t do that. He sacrificed the interests of some sick sailors to get knowledge that would save countless future sailors. And the results were ignored for fifty years anyway, so those dying sailors suffered for nothing, at least in the short term.

Philosophers have been arguing about this ever since.

What Makes Research Different from Treatment

When your doctor gives you medicine, the point is to help you. When a researcher gives you an experimental treatment, the point is to learn something that might help other people someday. These are different goals, and they can pull in opposite directions.

Here’s the basic problem. Most medical studies involve three kinds of risk:

Absolute net risks happen when a procedure has risks but no chance of helping you. If a healthy volunteer gets a drug just to see if it’s safe, they’re taking a risk for zero personal benefit. Even routine blood draws for research purposes fall into this category—you might get a bruise or an infection, but you gain nothing medically.

Relative net risks happen when a study gives you a treatment that’s worse than what you could get outside the study. Imagine a trial comparing a cheap drug to an expensive one. If the expensive one works better, getting the cheap one is worse for you personally—even if both are “safe.”

Indirect net risks happen when an experimental treatment interferes with other medications you’re taking, or when research injuries lead to complications that require risky follow-up care.

Almost everyone agrees that exposing people to these kinds of risks requires some kind of justification. The question is: what counts as a good enough reason?

The Patient vs. The Researcher

One influential view says that doctors should treat research participants the same way they treat patients. On this view, researchers shouldn’t do anything to participants that a good doctor wouldn’t do to a patient. That means no procedures that harm without helping, no treatments known to be worse than alternatives, and no random assignment of treatments.

The problem is that this would shut down almost all medical research. Phase 1 safety studies in healthy volunteers? Gone, because they offer no medical benefit. Randomized trials comparing two treatments? Gone, because doctors don’t flip coins to decide what’s best for you. Even routine blood draws for research purposes would be questionable, since they carry a tiny risk with no upside for you.

Some philosophers say this is exactly right. Hans Jonas, writing in 1969, argued that medical progress is “gratuitous”—nice to have, but not something we should risk harming people to achieve. He thought our descendants have a right to an “unplundered planet” but not necessarily to miracle cures. This sounds harsh, but think about it: would it be worth running experiments on unwilling people just to cure arthritis? Probably not. But what about curing Alzheimer’s or pediatric cancer? Now it gets harder.

Other philosophers argue that research is simply different from medical care, and it’s a mistake to apply the same rules. A researcher isn’t your doctor, they say. The researcher-patient relationship is different from the doctor-patient relationship. If you sign up for a study knowing the risks and wanting to help, that changes things.

The “Just Ask Permission” Approach

This brings us to the simplest solution: just get people’s permission. If someone understands the risks and agrees to participate, what’s the problem? This is called the libertarian view, and it sounds appealing.

But almost nobody in research ethics actually takes this position, and here’s why.

First, there are people who can’t give meaningful consent: children, people with dementia, people in emergency situations. If “just ask permission” were the rule, we could never test treatments on kids or on unconscious accident victims. Yet seventy percent of medications given to children have never been tested in children—meaning kids are getting drugs based on guesswork. That causes harm too.

Second, studies show that even when adults “consent,” they often don’t understand what they’re agreeing to. In randomized trials, many participants don’t realize that their treatment is being chosen by chance rather than by a doctor who knows what’s best for them. They think they’re getting medical care when they’re actually getting research. If people don’t understand, can they really consent?

Third, some things shouldn’t be done to people even if they say yes. Imagine a study where researchers physically and emotionally abuse volunteers for a week, then study ways to help abuse victims cope. Participants might fully understand and agree. But most people think this study is unethical regardless. The problem isn’t just what happens to participants—it’s what researchers are doing to them. There are some things we shouldn’t do to other people, even with permission.

The “It’s Like Any Other Job” Approach

Recently, some philosophers have argued that medical research isn’t special. We let people take risky jobs all the time: construction workers, firefighters, factory workers. They expose themselves to danger for the benefit of others (the customers or clients) and get paid for it. Why should research be different?

If research participants are just doing a job, then the ethical questions become familiar ones: Are they paid fairly? Are the working conditions safe enough? Are they being exploited?

This is called the “research exceptionalism” debate. Traditional research ethics treats medical studies as a special activity requiring special rules: independent review, detailed written consent forms, limits on what can be done even to consenting adults. But maybe that’s overkill. Maybe we should treat research participants more like workers and less like vulnerable subjects.

Critics point out that research is different in important ways. Factory workers usually know what they’re getting into and can quit. Research participants often don’t understand the study, and quitting might mean losing access to treatment. Also, the benefits of research go to distant future people, not to customers buying a product.

Should Everyone Be Required to Participate?

This is where things get really interesting. Some philosophers argue that if you’ve benefited from past research—and if you’ve ever taken medicine, you have—you might have an obligation to participate in future studies. It’s a kind of fairness: you owe something to the system that helped you.

But this runs into problems. First, if I benefit from research done before I was born, I can’t repay those people. Participating in today’s studies helps future people, not past ones. Second, if someone does you a favor without being asked, are you obligated to return it? If your neighbor shovels your driveway without asking, you don’t owe them anything in return—at least not the way you would under a contract.

A different approach imagines what people would agree to if they didn’t know which generation they belonged to. Imagine designing rules for medical research from behind a “veil of ignorance” where you don’t know whether you’re in the past, present, or future. You might agree to a system where everyone takes a turn being a research participant, because the benefits outweigh the costs over time. This would justify research even with children—since in this imaginary bargain, you might end up being a child who needs research to be done.

But critics say this approach proves too much. It might justify almost any study if the benefits are big enough, including ones we think are clearly wrong. Imagine a study that intentionally infects a few people with HIV to find a cure for AIDS. The benefits would be enormous—saving millions of lives—but most people think infecting someone on purpose is unethical, even if the math works out.

The “Low Enough Risk” Approach

Current regulations take a different tack. For people who can’t consent (children, etc.), they say research is okay only if the risks are “minimal.” But what counts as minimal?

Some say minimal means “no chance of serious harm.” But that would rule out even a single blood draw, which carries a tiny risk of infection or fainting. Others say minimal means “no riskier than routine medical exams” or “no riskier than daily life.” But daily life is full of risks—crossing the street, riding in a car, eating food that might be contaminated. If we accept those risks for personal benefit, should we accept the same risks for the benefit of others?

Here’s a disturbing thought. If “minimal risk” means “the risks of daily life,” and we do enough minimal-risk studies, eventually some participants will die or be permanently disabled. The risks are low but real. The question is whether this is acceptable when participants get no medical benefit from taking those risks.
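You can see how low risks pile up with a little arithmetic. Here’s a rough sketch in Python—the per-procedure risk is a made-up number chosen only to illustrate the point, not a real statistic:

```python
# Illustrative only: assume each minimal-risk procedure carries a tiny
# chance of serious harm. The question is what happens across many people.
per_procedure_risk = 1 / 100_000  # assumed, not a real figure

def chance_of_at_least_one_harm(participants: int) -> float:
    """Probability that at least one participant is seriously harmed.

    The chance that nobody is harmed is (1 - risk) multiplied by itself
    once per participant; subtracting from 1 gives the chance of at
    least one harm.
    """
    return 1 - (1 - per_procedure_risk) ** participants

for n in [1_000, 100_000, 1_000_000]:
    print(f"{n:>9,} participants -> "
          f"{chance_of_at_least_one_harm(n):.1%} chance of at least one serious harm")
```

With these assumed numbers, the chance of at least one serious harm is under 1% for a thousand participants but climbs past 99% for a million. The individual risk never changed; only the number of people exposed did.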

Some philosophers argue that the “risks of daily life” standard is confused. We accept risks in daily life because the activities benefit us (going to school, playing sports, visiting friends). But research participation doesn’t benefit the participant medically. Also, research risks are often additive rather than substitutive. If a study requires you to drive to the clinic, you don’t necessarily skip another car trip—you just add this one. So you’re doubling your driving risk, not replacing it.

Do Participants Have to Share the Study’s Goals?

Hans Jonas had a different proposal. He thought research was ethical only when participants “appropriated the research purpose into their own scheme of ends”—when they genuinely shared the goals of the study. A cancer patient might share the goal of finding a cancer cure. But a healthy volunteer taking an experimental drug just to see if it’s safe? Jonas thought this was harder to justify.

This gets at something important. Research doesn’t just risk harming people physically. It risks treating them as tools, as means to an end. Jonas worried that even with consent, researchers might be treating participants as if their own goals didn’t matter.

But what does it mean to “share” a goal? Do I have to have the disease being studied? Could I share the goal because my grandmother died from it? What if I just think medical progress is important in general? Once you start stretching the idea, it becomes less clear what Jonas’s requirement actually rules out.

The Bigger Picture: What About Money?

Most medical research today isn’t done by government agencies or universities—it’s done by pharmaceutical companies trying to make money. This changes the ethical calculus in important ways.

Companies sometimes develop “me-too” drugs that are essentially identical to existing treatments, just to grab market share. These studies expose participants to risks without any real benefit to society. The profit motive can also bias how studies are designed, conducted, and reported.

If a company makes billions of dollars from a drug developed using human participants, is it fair that those participants typically receive little or no compensation? Some argue that participants are being exploited. Others worry that paying large sums would create “undue inducement”—tempting people to take risks they wouldn’t normally accept.

The Latest Thinking: Back to Blending Treatment and Research

For decades, the trend was to separate research from clinical care more and more strictly. Research got its own regulations, its own review boards, its own consent forms. The goal was protection.

But this separation has costs. It makes research expensive and slow. It means studies are done on carefully selected volunteers who aren’t like real patients. And it creates a “free rider” problem: everyone benefits from medical progress, but only a small number of people accept the risks of making it happen.

Some commentators now argue for “learning health care systems” where research is woven back into everyday medical care. Every patient’s data and experience would contribute to improving treatment for everyone. The idea is that if everyone participates as both benefiter and contributor, the system becomes fairer and more efficient.

The challenge is whether we can do this without bringing back the abuses of the past. How much should patients be told? Can they opt out? What happens when learning requires randomizing patients to different treatments without their explicit consent? The debate is very much alive.

Where We Are Now

The ethics of medical research remains genuinely unsettled. Everyone agrees that exposing people to risks for the benefit of others requires justification. Nobody agrees on exactly what that justification is.

The stakes are enormous. Every medicine you’ve ever taken exists because someone agreed to be experimented on. Future treatments depend on someone agreeing today. And countless people have been harmed or even killed by unethical research.

Here’s the question that still doesn’t have a clean answer: When is it okay to use one person to help others, and who gets to decide?


Key Terms

  • Net risk: The chance of harm that isn’t balanced by a chance of personal benefit.
  • Clinical equipoise: When experts genuinely don’t know which treatment is better, so random assignment doesn’t disadvantage anyone.
  • Minimal risk: A standard for how much risk is acceptable in research with people who can’t consent.
  • Research exceptionalism: The idea that medical research is special and needs its own strict rules, different from normal activities.
  • Undue inducement: When payment or other benefits are so attractive they make people ignore serious risks.
  • Informed consent: When someone agrees to participate after understanding what will happen and what the risks are.

Key People

  • James Lind (1716–1794): A British ship’s surgeon who conducted one of the first controlled medical trials, treating scurvy with citrus fruits while giving other sailors ineffective treatments.
  • Hans Jonas (1903–1993): A German-born philosopher who argued that medical progress is optional and that research participants must genuinely share the goals of the study.
  • John Stuart Mill (1806–1873): British philosopher whose “libertarian” view—that people should be free to do what they want with consenting others—influences the “just ask permission” approach to research ethics.

Things to Think About

  1. If you could design a study that would definitely cure a terrible disease but would require exposing ten people to a high risk of death, would that be ethical? What if it were one person? What if it were a thousand?

  2. Should children be allowed to participate in medical research? They can’t meaningfully consent, but without research on children, we’re giving them drugs tested only on adults. Which is worse: doing research without consent, or giving untested treatments?

  3. Is it fair that research participants rarely share in the profits when a company makes billions from a drug they helped test? Would paying large sums to participants make things better or worse?

  4. The “risks of daily life” standard says research is okay if it’s no riskier than what people normally face. But is crossing the street really the right comparison? Should we accept the same risks for research as we do for getting to school?

Where This Shows Up

  • Medical consent forms: The pages you’d sign before participating in a study are the direct result of these ethical debates.
  • Vaccine development: Every new vaccine was tested on thousands of volunteers through this same process of risk and benefit.
  • Your own medical care: When your doctor prescribes a drug, their confidence in it comes from clinical trials that faced all these ethical questions.
  • News stories: Reports about “controversial studies” or “unethical experiments” usually involve disagreements over exactly the issues in this article.
  • School science fairs: Even if you’re just surveying classmates, you’re doing research with human subjects, and similar ethical questions apply (though with much lower stakes).