How to Do Logic with Algebra
Imagine you’re playing a game where you have to figure out whether an argument is valid. You know the kind: “All dogs are mammals. All mammals are animals. Therefore, all dogs are animals.” That one’s easy. But what about an argument with five premises about classes of things that overlap in complicated ways? For centuries, if you wanted to check whether an argument was valid, you had to memorize a big list of valid forms—like memorizing every possible move in chess instead of learning the rules. Then, in 1847, a British mathematician named George Boole had an idea that changed everything: what if you could turn logic into algebra?
This is the story of how a handful of thinkers tried to turn reasoning into a kind of math—and what happened when they tried.
The Big Idea: Logic as Equations
Here’s the basic move Boole made. Instead of thinking about “dogs” and “mammals” as words, think about them as classes—groups of things. The class of all dogs. The class of all mammals. Then the statement “All dogs are mammals” becomes a claim about those classes: the class of dogs is contained inside the class of mammals.
But Boole wanted to go further. He wanted to write that claim as an equation. If you let \(x\) stand for the class of dogs and \(y\) stand for the class of mammals, then “All dogs are mammals” becomes \(x = xy\). What does that mean? It means: the class of dogs is exactly the same as the class of things that are both dogs and mammals. Which makes sense—everything that’s a dog is also a mammal, so the dog-class and the dog-and-mammal-class are the same.
This was a genuinely weird idea at the time. Nobody had thought of treating logical statements as algebraic equations that you could solve. Boole’s method worked like this:
- Translate your logical problem into equations.
- Use algebraic techniques to solve the equations.
- Translate the solution back into ordinary language.
The equations followed rules that looked a lot like ordinary arithmetic, but with a twist. In ordinary arithmetic, \(x \times x = x^2\). But in Boole’s system, \(x \times x = x\). (Think about it: the class of dogs intersected with itself is still just the class of dogs.) This one difference—the “idempotent law”—was the key that unlocked everything.
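Boole’s class operations can be sketched with Python sets, where multiplication is intersection. (The class names and members here are made up for illustration.)

```python
# Boole's class algebra sketched with Python sets: multiplication is
# intersection, and "All dogs are mammals" becomes the equation x = xy.

dogs = {"rex", "fido"}
mammals = {"rex", "fido", "whiskers"}   # contains all the dogs, plus a cat

# The idempotent law: a class intersected with itself is unchanged.
assert dogs & dogs == dogs              # x * x = x, not x^2

# "All dogs are mammals" as x = xy: the dog-class equals the
# dog-and-mammal class.
assert dogs == dogs & mammals
```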
The Problem with Adding Things
But Boole ran into trouble when he tried to add classes. In ordinary algebra, \(2 + 3 = 5\). But what is “dogs plus cats”? If you mean the union of the two classes—everything that’s either a dog or a cat—you get a perfectly sensible result. But what if the classes overlap? What is “dogs plus mammals”? The mammal class already contains all the dogs, so if you just add them together you’re counting the dogs twice. Boole dealt with this by saying addition was only allowed when the classes were disjoint (had no overlap). Otherwise, the expression was “uninterpretable”—you could write it down and do math with it, but you couldn’t translate it back into a sensible statement about classes.
This bothered a younger logician named William Stanley Jevons. In 1863, Jevons wrote to Boole and said: why not just define addition as the ordinary union of classes, even when they overlap? Then you’d get the rule \(x + x = x\) instead of \(x + x = 2x\). Boole hated this idea. It would have destroyed the connection between his logical algebra and ordinary algebra, which he thought was essential. He stopped writing back to Jevons.
Jevons went ahead and published his own system anyway in 1864. He called it “Pure Logic” because he was cutting the cord to ordinary arithmetic. He kept Boole’s idea of using equations, but he made all the operations work for any classes, not just disjoint ones. This turned out to be the right move historically—modern Boolean algebra is much closer to Jevons’s system than to Boole’s original.
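Jevons’s move can be sketched the same way, taking addition to be ordinary set union so it works for any classes. (Again, the classes are illustrative.)

```python
# Jevons's addition as set union: defined for any classes, overlapping
# or not, with no double counting and no "uninterpretable" expressions.

dogs = {"rex", "fido"}
cats = {"whiskers"}
mammals = {"rex", "fido", "whiskers"}

assert dogs | cats == {"rex", "fido", "whiskers"}  # disjoint: same as Boole
assert dogs | mammals == mammals                   # overlap: still sensible
assert dogs | dogs == dogs                         # Jevons's rule x + x = x
```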
A Different Starting Point: Subsumption
Charles Sanders Peirce (say “purse”) took a different approach. Instead of starting with equality as the basic relation, he started with subsumption—the idea that one class is contained in another. The symbol he used looked like a stretched-out “less than” sign: \(a \prec b\) meant “all a is b.”
This might seem like a small change, but it changed the whole feel of the system. Instead of solving equations, you were figuring out what followed from containment relations. Peirce then defined addition and multiplication as operations that gave you the “least upper bound” and “greatest lower bound” of classes—fancy terms for the smallest class that contains both, and the largest class that’s contained in both.
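Reading subsumption as the subset relation, addition and multiplication come out as union and intersection, and the “least upper bound” and “greatest lower bound” descriptions can be checked directly. (A minimal sketch with made-up classes.)

```python
# Peirce's subsumption a ≺ b rendered as the subset relation, with
# + as least upper bound (union) and × as greatest lower bound (intersection).

a = {1, 2}
b = {1, 2, 3}

assert a <= b                    # a ≺ b: all a is b

# Union is an upper bound of both classes (the *least* one: any class
# containing both a and b must contain their union).
upper = a | b
assert a <= upper and b <= upper

# Intersection is a lower bound of both (the *greatest* one).
lower = a & b
assert lower <= a and lower <= b
```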
Peirce also made another quiet but revolutionary change. In traditional logic, “All A is B” only made sense if there actually were As—otherwise the statement was considered meaningless or false. Peirce said: no, “All A is B” is true even if there are no As. (Think about it: “All unicorns are magical” seems like a true statement, even though there are no unicorns.) This meant you could no longer automatically conclude “Some B is A” from “All A is B.” This was a clean break with two thousand years of logical tradition.
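Peirce’s convention survives in modern programming languages: a universally quantified claim over an empty class counts as true. A small sketch (the classes are illustrative):

```python
# Peirce's convention in miniature: "All A is B" is true when A is empty.

unicorns = set()          # there are no unicorns
magical = {"phoenix"}

# Python's all() over an empty iterable returns True, matching Peirce:
# "All unicorns are magical" holds vacuously.
assert all(u in magical for u in unicorns)

# But the traditional inference to "Some magical thing is a unicorn" fails:
assert not any(m in unicorns for m in magical)
```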
Relations and Quantifiers
De Morgan and Peirce both realized that logic wasn’t just about classes of things—it was also about relations between things. “John loves Mary” isn’t about classes; it’s about a relationship. De Morgan started working on a logic of relations in the 1850s, and Peirce picked it up and ran with it.
Peirce introduced quantifiers into his logical algebra using the symbols \(\Sigma\) (for “there exists”) and \(\Pi\) (for “for all”). This was a huge advance. With classes alone, you could say things like “All dogs are mammals.” But with relations and quantifiers, you could say things like “Every dog has a tail that belongs to it” or “There is a mammal that all dogs like.” This was the beginning of what we now call first-order logic—the kind of logic that underlies most of modern mathematics and computer science.
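Peirce’s \(\Sigma\) and \(\Pi\) behave like nested `any` and `all` over a relation. Here is a sketch with a made-up “likes” relation, showing how the order of the quantifiers changes what is claimed:

```python
# Σ and Π over a relation, sketched with Python's any/all.
# The "likes" relation below is invented for illustration.

dogs = {"rex", "fido"}
mammals = {"rex", "fido", "whiskers"}
likes = {("rex", "whiskers"), ("fido", "whiskers")}   # (dog, liked mammal)

# Σ_y Π_x likes(x, y): "There is a mammal that all dogs like."
assert any(all((d, m) in likes for d in dogs) for m in mammals)

# Π_x Σ_y likes(x, y): "Every dog likes some mammal" — a weaker claim
# that follows from the first but not vice versa.
assert all(any((d, m) in likes for m in mammals) for d in dogs)
```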
The Systematizer: Schröder
Ernst Schröder was a German mathematician who took all these developments and organized them into a massive three-volume work called Lectures on the Algebra of Logic (1890–1905). He was like the person who comes in after the inventors have scattered their ideas everywhere and puts them all into one coherent system.
Schröder’s work had three parts. Volume I dealt with the equational logic of classes—basically, Boole’s approach but cleaned up and made rigorous. Volume II tackled the problem of existential statements—sentences that say something exists. Schröder showed that you couldn’t express “Some X is Y” as an equation; you had to use a negation of an equation, like \(XY \neq 0\). Volume III dealt with the algebra of binary relations.
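The contrast between equations and negated equations can be sketched with sets: the universal statement is an equation on the intersection, while the existential statement only says the intersection is nonempty. (The classes here are made up.)

```python
# Schröder's distinction: "All X is Y" is an equation (XY = X), but
# "Some X is Y" needs the *negation* of an equation (XY ≠ 0).

X = {"rex", "whiskers"}
Y = {"whiskers", "tweety"}

all_x_is_y = (X & Y == X)         # equation: the intersection is all of X
some_x_is_y = (X & Y != set())    # negated equation: the intersection isn't empty

assert not all_x_is_y             # not every member of X is in Y
assert some_x_is_y                # but the classes do overlap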
Schröder’s treatment of relations was so thorough that it became the foundation for later work. He even tried to solve “relation equations”—given an equation involving relation symbols, find the most general solution for one of them in terms of the others. Peirce thought this was a waste of time, but Schröder’s methods turned out to be precursors to important ideas in model theory.
Axioms and Independence
At the turn of the twentieth century, there was a wave of interest in axiomatizing different areas of mathematics—making the assumptions explicit and checking whether they were really independent of each other. Edward Huntington took on the algebra of logic in 1904, producing several different sets of axioms for what was now being called “Boolean algebra” (a name coined by Henry Sheffer).
Sheffer himself made a fascinating discovery: you could do all of Boolean algebra with a single operation. He called it “joint exclusion” (now known as the Sheffer stroke), written as \(a \mid b\), meaning “not both a and b.” From this one operation, you could define everything else—negation, addition, multiplication. Whitehead and Russell called this the greatest advance in logic since their own Principia Mathematica. (Others were less impressed.)
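The reduction is easy to verify by hand: define the stroke, then build negation, multiplication, and addition out of it and check them against the usual truth tables. A minimal sketch:

```python
# The Sheffer stroke a | b = "not both a and b" (NAND), with the other
# Boolean operations defined from it alone.

def stroke(a: bool, b: bool) -> bool:
    return not (a and b)

def NOT(a):    return stroke(a, a)                       # a|a = not a
def AND(a, b): return stroke(stroke(a, b), stroke(a, b)) # negate the stroke
def OR(a, b):  return stroke(stroke(a, a), stroke(b, b)) # De Morgan in disguise

# Check the derived operations against Python's built-ins on every input.
for a in (False, True):
    assert NOT(a) == (not a)
    for b in (False, True):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
```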
One famous problem from this period: the Robbins Conjecture. In 1933, Herbert Robbins noticed that a slightly simpler equation might replace one of Huntington’s axioms. Neither Huntington nor Robbins could prove it, and the problem stumped mathematicians for over sixty years. Finally, in 1996, an automated theorem-proving program called EQP found a proof. A computer had done what generations of human mathematicians couldn’t.
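Checking that the Robbins equation holds in the familiar two-element Boolean algebra takes four lines; the hard direction, which EQP settled, was showing that the equation forces every model to be Boolean. A sketch of the easy check, writing the equation as \(n(n(a + b) + n(a + n(b))) = a\):

```python
# The Robbins equation n(n(a + b) + n(a + n(b))) = a, verified in the
# two-element Boolean algebra ({0, 1}, + as OR, n as complement).
# This easy direction is trivial; EQP's 1996 proof went the other way,
# showing the equation (with + associative and commutative) implies
# all the Boolean algebra axioms.

def n(a):       return 1 - a   # complement
def plus(a, b): return a | b   # join (OR)

for a in (0, 1):
    for b in (0, 1):
        assert n(plus(n(plus(a, b)), n(plus(a, n(b))))) == a
```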
The Stone Age
Marshall Stone was interested in something completely different—rings of linear operators—when he noticed that certain special elements in these rings formed a Boolean algebra. This led him to ask: what does an arbitrary Boolean algebra actually look like? His answer, proved in the 1930s, was stunning: every Boolean algebra is essentially an algebra of sets. No matter how abstract and weird the Boolean algebra, you can always think of it as the collection of all subsets of some space.
Stone also discovered a deep connection between Boolean algebras and certain topological spaces (now called Stone spaces). This connection turned out to be incredibly useful. If you wanted to know something about a Boolean algebra, you could translate the question into a question about a topological space, solve it there, and translate back. This kind of back-and-forth between different mathematical worlds became a standard technique.
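Stone’s theorem can be seen in miniature with an example not from the text: the divisors of 30, with lcm as addition, gcd as multiplication, and \(n \mapsto 30/n\) as complement, form a Boolean algebra, and mapping each divisor to its set of prime factors exhibits it as the algebra of all subsets of \(\{2, 3, 5\}\).

```python
# Stone representation in miniature: the divisors of 30 form a Boolean
# algebra isomorphic to the algebra of subsets of {2, 3, 5}, via the map
# phi(d) = "set of primes dividing d".

from math import gcd

divisors = [1, 2, 3, 5, 6, 10, 15, 30]
primes = {2, 3, 5}

def phi(d):
    return frozenset(p for p in primes if d % p == 0)

for a in divisors:
    for b in divisors:
        lcm = a * b // gcd(a, b)
        assert phi(lcm) == phi(a) | phi(b)        # lcm  -> union
        assert phi(gcd(a, b)) == phi(a) & phi(b)  # gcd  -> intersection
    assert phi(30 // a) == primes - phi(a)        # complement -> set difference
```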
Still Alive
The algebra of logic tradition didn’t end in the 1800s. In the 1940s, Alfred Tarski revived the calculus of relations and created new algebraic systems (cylindric algebras, polyadic algebras) designed to capture the full power of first-order logic. These systems raised new questions: could every model of the axioms for relation algebras actually be represented as an algebra of relations on a set? (Partially yes, partially no.) Could you axiomatize the calculus of relations with finitely many equations? (No, proved by Monk in 1964.)
These are still active areas of research. Modern logicians and computer scientists work on things like relation algebras, Boolean algebras with operators, and algebraic approaches to various non-classical logics. The original vision—that logic could be done with algebra—turned out to be remarkably fruitful.
So What Was the Point?
The algebra of logic tradition did something that seems obvious in hindsight but was revolutionary at the time: it showed that reasoning could be calculated. You didn’t need to memorize lists of valid argument forms. You could write down equations, apply rules, and get answers. This is the ancestor of everything from the logic circuits in your phone to the theorem provers that mathematicians use today.
But it also raised deep questions that are still being worked out. How closely should logic be tied to the algebra of numbers? What counts as a “good” notation? Can all of reasoning be reduced to calculation? The people in this story had different answers, and their disagreements pushed the field forward.
Nobody has fully settled these questions. And maybe that’s the most interesting thing about the algebra of logic: it’s still being built.
Appendices
Key Terms
| Term | What it does in this debate |
|---|---|
| Class | A group of things (dogs, mammals, red things) that logic can operate on |
| Intersection | The operation that picks out things belonging to both of two classes |
| Union | The operation that picks out things belonging to either of two classes |
| Idempotent law | The rule that \(x \times x = x\), which distinguishes logical algebra from ordinary arithmetic |
| Subsumption | The relation of one class being contained in another (all A is B) |
| Quantifier | A symbol that says “for all” or “there exists” |
| Boolean algebra | The fully developed algebraic system for logic, named after Boole |
| Axiomatization | A complete list of the basic assumptions from which everything else follows |
| Model | A concrete example that satisfies a set of axioms |
Key People
- George Boole (1815–1864): A self-taught British mathematician who had the radical idea that logic could be done as algebra. His two books (1847 and 1854) launched the whole tradition.
- William Stanley Jevons (1835–1882): An economist and logician who broke with Boole by making all algebraic operations work for any classes. He simplified and purified the system.
- Charles Sanders Peirce (1839–1914): An American philosopher and logician who introduced subsumption, modern semantics (allowing empty classes), relations, and quantifiers into the algebra of logic. He was brilliant but disorganized.
- Ernst Schröder (1841–1902): A German mathematician who systematically organized the algebra of logic into three massive volumes. His work was the definitive reference for decades.
- Marshall Stone (1903–1989): An American mathematician who proved that every Boolean algebra corresponds to an algebra of sets, and connected Boolean algebras to topology.
- Alfred Tarski (1901–1983): A Polish logician who revived the algebra of relations and created new algebraic systems to match first-order logic. One of the most influential logicians of the twentieth century.
Things to Think About
- Boole thought the connection between logical algebra and ordinary algebra was essential. Jevons thought it was a mistake. Who do you think was right? Can you think of an example where the connection helps, and another where it gets in the way?
- Peirce decided that “All A is B” should be true even when there are no As. This seems simple, but it means you can’t conclude “Some B is A” from “All A is B.” Does this match how you actually use language? When you say “All unicorns are magical,” do you feel like you’re also saying “Some magical things are unicorns”?
- The Sheffer stroke can define all of Boolean algebra from a single operation. Does a simpler set of basic operations always make a system better? What might be lost when you reduce everything to one thing?
- Stone showed that every Boolean algebra is an algebra of sets, but Tarski discovered that not every relation algebra is an algebra of relations. Why should it matter whether something has a “concrete” representation? What changes when you find out your abstract system actually lives somewhere in the real world?
Where This Shows Up
- Computer chips: The logic gates in every processor are physical implementations of Boolean algebra. AND, OR, and NOT gates do exactly what Boole’s algebra described.
- Search engines: When you search “dogs AND cats” or “dogs NOT cats,” you’re using Boolean operations on classes of web pages.
- Programming: Most programming languages have Boolean variables (true/false) and use Boolean algebra to control what the program does.
- Database queries: SQL and other database languages let you combine conditions with AND, OR, and NOT, directly descended from the algebra of logic.
- Automated theorem proving: The program that solved the Robbins Conjecture is a direct descendant of Boole’s idea that reasoning could be calculated algorithmically.