What Makes Something an Action?
Suppose you’re sitting at your desk, and your arm goes up. Maybe you raised it—maybe you wanted to ask a question. Or maybe someone sneaked up behind you and tickled you, and your arm just jerked upward. In both cases, your arm went up. But there’s a huge difference between the two. In the first case, you did something. In the second, something just happened to you.
This is the central puzzle in the philosophy of action. Philosophers want to know: what’s the difference between an arm going up and raising your arm? What extra ingredient turns a bodily movement into something you do—an action you can be responsible for, proud of, or blamed for?
This might sound like a simple question. But as you’ll see, philosophers have been arguing about it for a very long time, and they still don’t agree.
The Standard Story: You Act Because You Want Something
The most popular answer is that an action is a bodily movement caused by the right kind of mental states—typically, a desire and a belief. This view is sometimes called “causalism,” and it was developed in a particularly influential way by the philosopher Donald Davidson.
Here’s the basic idea. Suppose you go to the barber for a haircut. According to Davidson, what explains your action is a “primary reason”: a pair consisting of a desire (you want a haircut) and a belief (you believe going to the barber will get you a haircut). This belief-desire pair doesn’t just make sense of your action; it causes your action. You go to the barber because you want a haircut and believe this is how to get one.
That seems pretty straightforward. But the real puzzle emerges when you think about the fact that you could have the same desire and belief and not act on them. You might want a haircut and know where to get one, but still decide not to go. So what makes the difference between merely having a reason and actually acting on it?
Most causalists say the answer is that in the case of action, the desire and belief actually cause the bodily movement. The intention to act is what connects your reasons to your body’s movements. In the standard story, your desire for an end and your belief about the means jointly cause an intention, which then causes the corresponding movements of your body.
This sounds neat, but there’s a famous problem.
The Problem of Causal Deviance (or: When Causes Go Wrong)
Davidson himself noticed a troubling kind of counterexample to his own theory. Imagine a climber holding another man on a rope. The climber wants to be rid of the weight and danger of holding the other man, and he knows that if he loosens his hold on the rope, he’ll be rid of the weight and danger. This thought—the combination of desire and belief—makes him so nervous that his hands shake and he accidentally lets go. The climber’s desire and belief caused him to release the rope. But did he do it intentionally? No. He didn’t choose to let go. It was an accident, even though it was caused by exactly the kind of mental states that are supposed to produce intentional actions.
This is called the problem of “causal deviance.” The causal chain from your mental states to your bodily movements has to be the right kind of causal chain—not a weird, accidental one. But no one has managed to give a complete, satisfying account of what makes a causal chain the “right kind.” Philosophers have offered various solutions: maybe the intention has to be a “proximate cause” of the movement; maybe the action has to be guided by feedback from the environment; maybe the intention has to continue guiding the action as it unfolds, not just kick it off. But none of these solutions is universally accepted.
Some philosophers think this problem shows that the whole causal approach is doomed. Others think the problem just needs more work. But everyone agrees: it’s a genuine puzzle.
Beyond the Instant: How We Act Over Time
Most of Davidson’s examples are very short actions: flicking a switch, buttering toast. Things you do in a moment and then they’re done. But a lot of what we do takes place over much longer periods. You might be writing a report that takes hours, or training for a sport that takes months, or pursuing a friendship that takes years. How does the philosophy of action handle these extended activities?
Michael Bratman developed a powerful theory to address this. On his view, intentions have two important “faces.” One face is the connection to immediate action (the kind Davidson focused on). The other face is their role in planning and coordinating over time. When you form a future-directed intention—“I’m going to visit my grandmother next weekend”—that intention has what Bratman calls a settling function: it closes off deliberation and decides the question. And it has a coordinating function: it allows you to make plans (buy a gift, arrange a ride) that depend on the intention being stable.
Bratman argues that our capacity to form and maintain these future-directed intentions is essential to being the kind of limited, planning creatures we are. Without them, we’d be stuck deliberating forever, unable to coordinate our actions across time or with other people.
Some philosophers, however, think Bratman draws too sharp a line between intentions for the future and actions in the present. They argue that agency is essentially extended through time—that there’s no deep metaphysical break between deciding to make an omelet, preparing the ingredients, and actually cooking it. These are all parts of a single unfolding process.
Practical Knowledge: Knowing What You’re Doing Without Looking
Now we come to a very different approach to the question “What is an action?” This approach, developed by the philosopher Elizabeth Anscombe, focuses not on causes but on knowledge.
Anscombe noticed something strange about intentional actions. When you’re doing something intentionally, you typically know what you’re doing without having to observe yourself. If someone asks what you’re writing, you can answer without looking at the page. If you’re walking to the store, you know you’re walking to the store without needing to check your own behavior. This seems different from how you know what other people are doing. To know what your friend is writing, you’d have to look.
Anscombe called this “practical knowledge,” and she argued that it’s the key to understanding intentional action. On her view, an intentional action is an event that manifests practical knowledge—knowledge that is “the cause of what it understands.” The agent’s grasp of what she’s doing determines what the action is. If you don’t know you’re pumping water (you think you’re just moving your arms for exercise), then you’re not intentionally pumping water, even if water comes out.
This leads to a controversial claim: to act intentionally, you must know what you’re doing. Many philosophers find this claim too strong. They point to cases like Davidson’s carbon-copy case: someone writing heavily on paper, intending to produce ten legible copies. He doesn’t know he’s succeeding (he might be failing), but if he is making ten copies, is he doing it intentionally? Many people think yes, even though he doesn’t know it.
Defenders of the knowledge condition have responses to this, but the debate is very much alive. Even philosophers who reject the strict knowledge condition often agree that there’s something special about the way agents know their own actions—even if it’s just belief, or normal knowledge, or knowledge that applies only to some descriptions of the action.
What Kind of Thing Is an Action?
So far we’ve been assuming that actions are events—things that happen at a particular time and place. But some philosophers think this is wrong. They argue that actions are better understood as processes.
What’s the difference? An event is a completed thing. “Donald buttered the toast” refers to a finished action with a definite beginning and end. But a process is ongoing, unfolding. “Donald is buttering the toast” refers to something that’s happening right now, which could speed up or slow down, be interrupted, or never reach completion. Processes don’t have a fixed temporal location the way events do. The same process of buttering could take two minutes or ten, depending on what happens.
Why does this matter? Because many of our actions seem to have the features of processes, not events. Your action can speed up, slow down, change in quality, or be interrupted. And when we explain actions—“I’m moving my hands because I’m pumping water because I’m poisoning the inhabitants”—we use the language of ongoing processes, not completed events.
There’s ongoing debate about whether actions are events, processes, or something else entirely. But this isn’t just a technical quibble. How you answer this question affects how you understand the agent’s role in her actions, how you individuate one action from another, and how you make sense of the fact that a single action can be described in many different ways.
Can Animals Act?
If intentional action requires practical knowledge, or the capacity to act for reasons, or the ability to form complex plans, then what about non-human animals? Can a cat stalking a bird be said to act? What about a dog following you around because she wants a cookie?
Some philosophers, following Davidson, have argued that animals cannot act intentionally because they lack certain capacities—like the concept of belief, or the ability to reason about means and ends in a full sense. On this view, animal behavior is just behavior, not action.
But this view has become increasingly unpopular. Many philosophers argue that animals do act, even if their agency is simpler than ours. They point out that we constantly describe animals in action terms (“the squirrel is digging for nuts”), and that denying animal agency would make it hard to explain how human agency evolved or how children develop their capacities for action.
The rough consensus today is that animals are agents, but that there may be important differences between animal agency and full-blown human intentional action. The debate now focuses on exactly what capacities are required for different kinds of agency, and which animals have which capacities.
Something to Think About
The philosophy of action might seem abstract, but it touches on questions that matter in everyday life. When is someone responsible for what they do? What does it mean to say someone “could have done otherwise”? Are you the author of your actions, or just a passenger on a train of causes and effects? These are the kinds of questions that make the philosophy of action a living, unsettled field—one where the most interesting answers are still being worked out.
Appendices
Key Terms
| Term | What it does in the debate |
|---|---|
| Primary reason | A desire-belief pair that both makes sense of an action and (on the causalist view) causes it |
| Causal deviance | The problem that a mental state might cause a bodily movement through a weird path, producing something that isn’t really an action |
| Practical knowledge | The special way an agent knows what she’s doing without needing to observe herself |
| Future-directed intention | An intention to do something later, which helps us plan and coordinate our actions over time |
| Process | An ongoing, unfolding activity (like “buttering”) as opposed to a completed event (like “has buttered”) |
Key People
- Donald Davidson (1917–2003) — A hugely influential American philosopher who argued that actions are caused by beliefs and desires, and that explaining an action by giving the agent’s reasons is a form of causal explanation.
- Elizabeth Anscombe (1919–2001) — A British philosopher who argued that intentional action is essentially connected to practical knowledge, and that the question “Why?” reveals the structure of action.
- Michael Bratman (born 1945) — An American philosopher who developed the “planning theory” of agency, focusing on how intentions help us coordinate our actions over time.
Things to Think About
- If causal deviance is a problem for the view that actions are caused by mental states, does the same problem arise for a computer? If a program causes a robot to move through a chain of code, can that movement ever be an “action”? Is there a difference between a bug in the code and a deviant causal chain in a human?
- The knowledge condition says you must know what you’re doing to act intentionally. But what about habits? If you’re brushing your teeth while thinking about something else, do you know you’re brushing your teeth? Are you doing it intentionally?
- If animals can act, does that mean they can be responsible for their actions? Could a dog be blamed for stealing food? Could a cat be praised for catching a mouse? Does the line between action and mere behavior also mark the line between responsibility and non-responsibility?
Where This Shows Up
- Law: Courts have to decide whether someone acted “intentionally” or “accidentally,” which is exactly the distinction the philosophy of action tries to clarify.
- Neuroscience: Experiments by Benjamin Libet and others have been interpreted as showing that our brains “decide” to act before we’re consciously aware of deciding—raising questions about whether we’re really the authors of our actions.
- Artificial intelligence: When does a robot “do” something vs. just “happen” to move? Engineers and philosophers debate whether AI systems can have intentions or act for reasons.
- Everyday life: When a friend says “I didn’t mean to hurt your feelings,” they’re making a claim about whether their action was intentional—a claim that matters for apologies, forgiveness, and relationships.