Philosophy for Kids

Who’s to Blame When a Computer Makes a Mistake?

Imagine you’re riding in a driverless car. It’s driving itself down a street when a dog runs out into the road. The car has to decide: swerve and risk hitting a tree, or keep going and hit the dog. A split second later, the car swerves. You’re okay. The dog is okay. But you didn’t make that decision. The car did.

Now imagine the car didn’t swerve. The dog is hurt. Who’s responsible? The person who programmed the car? The company that sold it? The person sitting in the passenger seat? The car itself?

This is not just a puzzle about future technology. It’s a puzzle about now. Every day, computer systems make decisions that affect people’s lives — deciding who gets a loan, who gets parole, what news you see, whether an airplane stays in the air. And when something goes wrong, it’s often surprisingly hard to figure out who to blame, or even whether blaming anyone makes sense.

Philosophers who study moral responsibility — the conditions under which it makes sense to praise or blame someone for what they do — have noticed that computers mess up our normal way of thinking about responsibility. This article is about why that happens and what philosophers are trying to do about it.


The Three Things You Need to Be Responsible

Most philosophers agree that before you can hold someone morally responsible for something, three things usually have to be true.

First, there has to be a causal connection between the person and what happened. If you couldn’t have done anything to prevent it, it’s not fair to blame you. Second, the person has to have known or been able to know what might happen. If you genuinely couldn’t have known your action would cause harm, we usually let you off the hook. Third, the person has to have acted freely — not been forced or manipulated into doing what they did.

Computers make trouble for all three conditions.


The Problem of Many Hands

Start with the causal connection. Suppose an airplane crashes, and the investigation finds that the crash resulted from a software bug. Who caused it? The programmer who wrote that line of code? Their manager who approved the schedule? The tester who missed the bug? The company that pressured everyone to release the software quickly? The regulator who didn’t check carefully enough? The pilot who trusted the system?

This is what philosophers call the problem of many hands. In complex technological systems, lots of people contribute to the final outcome, and it’s often impossible to say any one person caused it. Each person’s contribution looks small and harmless by itself. It’s only when you add them all up that disaster happens.

One famous example is the Therac-25, a radiation therapy machine used in the 1980s. It was supposed to treat cancer patients with precisely controlled radiation beams. But due to a combination of software bugs, bad interface design, inadequate testing, and poor follow-up on earlier incidents, the machine delivered massive overdoses to six patients. Three of them died. The investigators concluded that no single person was to blame. The disaster was the result of many small failures that happened to line up.

A more recent example is the crashes of two Boeing 737 MAX airplanes in 2018 and 2019. Investigations found multiple contributing factors: a faulty automated system, insufficient pilot training, pressure to compete with other manufacturers, and regulatory failures. Again, no single person was responsible.

Even when we can trace the causal chain, computers create distance between people and the effects of their actions. A programmer writes code that will affect people she will never meet, years later, in situations she could not possibly have imagined. A drone pilot flies missions from a base halfway around the world, never seeing the people affected by their decisions. This distance can make it harder to feel responsible — and harder to know what you’re actually responsible for.


When You Don’t Know What You’re Doing

The second condition — knowing what might happen — also gets complicated with computers. In many cases, the people using computer systems don’t fully understand how they work.

Consider the risk-assessment tools used by judges in some U.S. states to help decide whether to release someone on parole. In 2016, an investigation found that one of these tools produced scores that seemed racially biased — it was more likely to wrongly predict that Black defendants would commit future crimes. But here’s the thing: the judges using the tool didn’t know how the algorithm calculated its scores. The algorithm was proprietary, meaning the company kept the details secret. So judges were making decisions based on something they couldn’t evaluate.

This is what philosophers call opacity. Many computer systems, especially those using machine learning, are essentially black boxes: you put data in, you get results out, but you can’t see what happens in between. That makes it hard for anyone — users, designers, regulators — to know what the system will do in advance.

There’s also a psychological effect called automation bias: the tendency to trust a sophisticated computer system more than your own judgment. In 1988, the U.S.S. Vincennes, a warship equipped with a highly sophisticated radar system, shot down an Iranian civilian airliner, killing all 290 people on board. The radar system had identified the airliner as a military aircraft. Two other ships nearby had correctly identified it as civilian, but they didn’t question the Vincennes’s identification. Why? Because they assumed the Vincennes’s fancy system must know better.

When people rely too heavily on computers, they stop using their own judgment. When they distrust computers too much, they ignore warnings. Neither is good for making responsible decisions.


Are You Free If You’re Being Nudged?

The third condition — freedom to act — raises perhaps the deepest questions. Computers don’t just help us make decisions; they shape the choices we have in the first place.

Some computer systems are designed to limit human freedom. An alcohol lock in a car forces the driver to pass a breath test before starting the engine. A speeding camera automatically issues tickets regardless of context or personal circumstances. These systems take discretion away from human beings. They may be good for safety, but they also reduce your ability to make your own choices.

Other systems are more subtle. Social media platforms use algorithms to decide what you see. These algorithms are designed to keep you engaged, but they also shape what you think about, what you believe, and what you want. Philosophers call this nudging — gently steering your behavior without you noticing. When these nudges are powerful and invisible, you have to wonder: how free are your choices really?

One extreme example is dark patterns — interface designs intentionally created to trick you into doing things you didn’t mean to do, like signing up for an expensive subscription you don’t want. If you’re being manipulated, can you really be held responsible for what you choose?


Can Computers Themselves Be Responsible?

Given all these problems, some philosophers have suggested a radical solution: maybe we should start treating computers as moral agents. If a computer makes a decision, maybe the computer should be held responsible, not the humans behind it.

This idea is controversial. Critics point out that computers don’t have feelings, intentions, or a sense of right and wrong. You can’t punish a computer in any meaningful way — you can’t make it feel bad, you can’t teach it a lesson. A computer that does something wrong can be reprogrammed or deleted, but that’s fixing a bug, not holding someone accountable.

Other philosophers take a middle position. They argue that computers are part of the moral picture, even if they aren’t full moral agents. When you act with a computer, the computer shapes what you do. Your action is a blend of human and technology. So maybe responsibility should be understood as distributed across humans and machines — not located in any single agent.

Still others say that worrying about computer moral agents is a distraction. The real problem, they argue, is that we’re too quick to let humans off the hook. If a complex system crashes, we should look harder at the organizational culture, the design choices, the training, and the regulations — not ask whether the computer is to blame.


What Should We Do?

There isn’t a settled answer to these questions. Philosophers are still arguing. But the debate has produced some useful ideas.

One is the distinction between negative responsibility (looking backward to decide who deserves blame) and positive responsibility (looking forward to what you ought to do). Some philosophers argue that computer professionals should adopt a positive view: instead of focusing on who can be blamed when something goes wrong, they should focus on their obligation to design systems that minimize harm. The question isn’t “Can I be blamed for this?” but “What should I do to make this better?”

Another idea is meaningful human control. This means designing sociotechnical systems — the combination of humans, technology, and organizations — so that human beings can actually exercise meaningful control over what happens. That might mean building systems that are more transparent, giving human operators better training and authority to override automated decisions, and making sure someone is always responsible for the system’s behavior.

Finally, some philosophers argue that we need a culture of accountability — a social environment where people are expected to answer for the consequences of their work rather than look for excuses. That would mean rejecting the idea that software bugs are simply inevitable, refusing to use “the computer” as a scapegoat, and not letting companies claim ownership of software while denying responsibility for how it performs.


The Big Puzzle

Here’s the strange situation we’re in. Computers are making more and more decisions that affect people’s lives. But our traditional ideas about responsibility were designed for a world where individuals did things with directly visible consequences. The distance, complexity, and opacity that computers introduce make it genuinely unclear who is responsible — or whether the concept even applies in the same way.

Maybe the right response is to create new ways of assigning responsibility. Maybe we need to expand our idea of who or what can be held accountable. Maybe we need to design our systems differently so that responsibility doesn’t get lost in the first place.

What do you think? When a driverless car hurts someone, who’s responsible? The programmer? The company? The passenger? The car? Or is the question itself the problem?


Appendices

Key Terms

  • Moral responsibility: The idea that someone can fairly be praised or blamed for an action and its consequences
  • Problem of many hands: When many people contribute to an outcome, it becomes hard to say any one person caused it
  • Automation bias: The tendency to trust a computer system’s output more than one’s own judgment
  • Opacity: When a system’s inner workings are hidden, making it hard to understand or evaluate its decisions
  • Meaningful human control: The idea that human beings should be able to genuinely influence and oversee the systems they use
  • Culture of accountability: A social environment where people are expected to answer for their actions rather than look for excuses

Key People

  • Helen Nissenbaum — A philosopher who argued that computer technologies create a systematic erosion of accountability, and that we need a culture of accountability to fix it
  • Deborah Johnson — A philosopher who argued that computers are not moral agents because they lack intentions, but they still matter morally as tools that carry human values
  • Luciano Floridi — A philosopher who suggested treating artificial agents as morally accountable without needing to hold them fully responsible, like we do with animals
  • Andreas Matthias — The philosopher who coined the term “responsibility gap” to describe how complex autonomous systems make it hard to hold anyone responsible

Things to Think About

  1. If a company sells a system that is too complex for any single person to understand, does that complexity let everyone off the hook, or should someone be responsible for creating a system they can’t explain?

  2. Suppose a self-driving car is programmed to always protect its passengers, even if that means harming pedestrians. Who made that moral choice — the programmer, the company, or the algorithm? And who should be blamed if it causes harm?

  3. Is it ever fair to hold someone responsible for something they couldn’t have known would happen? What if they should have tried harder to know?

  4. If we started treating computers as moral agents, would that make it easier or harder to make sure people behave responsibly?

Where This Shows Up

  • Self-driving cars — Companies and regulators are still figuring out who’s liable when an autonomous vehicle causes an accident
  • Social media algorithms — Debates about whether platforms are responsible for the content their algorithms amplify (misinformation, hate speech, radicalization)
  • AI in criminal justice — The use of risk-assessment algorithms in courts raises questions about fairness, transparency, and who is accountable for biased decisions
  • Military drones and autonomous weapons — International debates about whether machines should be allowed to make life-and-death decisions, and who would be responsible if they make mistakes