Philosophy for Kids

What Is Analysis? The Art of Untangling Ideas

Imagine you’re holding a knotted necklace. You can’t wear it until you find where the chain loops around itself and gently pull the strands apart. Now imagine that instead of a necklace, you’re holding an idea—a complicated one like “justice” or “knowledge” or “fairness”—and it’s tangled up in your mind. You need to untangle it to see what it’s really made of.

That act of untangling is what philosophers call analysis. The word comes from an ancient Greek term that literally means “loosening up” or “breaking apart.” In Homer’s Odyssey, Penelope uses the word when she secretly unravels at night the shroud she’s been weaving during the day. She’s analysing it—unpicking the threads. Ever since, philosophers have been borrowing that metaphor: trying to unpick ideas to see how they’re put together.

But here’s the strange thing. When philosophers actually try to do this—when they try to “analyse” something—they quickly discover that there’s no single way to do it. Different philosophers, in different times and places, have meant very different things by “analysis.” The history of philosophy is really the story of a web of methods for untangling, each one pulling at a different thread.


Four Ways to Untangle

Let’s say you want to analyse the concept “justice.” What would you actually do?

You might try to break it down into parts. Justice, you might decide, involves fairness, equality, and giving people what they deserve. You’re treating the concept like a Lego model: take it apart, see what pieces it’s made of. This is called decompositional analysis—breaking something into its components. It’s what most people today think of when they hear the word “analysis.”

But there’s another possibility. Instead of breaking justice into pieces, you might try to trace it back to something more basic. You might ask: “What principles does the idea of justice depend on? What would I need to prove before I could prove that something is just?” This is called regressive analysis—working backward from what you’re trying to understand to the foundations it rests on. Ancient Greek mathematicians did this all the time. To prove a complicated theorem, they’d start by assuming it was true, then work backward step by step until they reached something they already knew, and then reverse the whole process to build the proof properly.

Here’s a third possibility. Maybe the problem isn’t that justice is complicated—maybe the problem is that we’re saying it wrong. Maybe the way we talk about justice is secretly confusing, and we need to translate it into clearer language before we can even start. This is interpretive analysis—transforming what you’re analysing into a different, more precise form. It’s like when your math teacher tells you to “translate the word problem into an equation” before solving it.
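For instance, a made-up word problem like "Ann's brother is 3 years older than Ann, and their ages add up to 11" translates into:

```latex
a + (a + 3) = 11 \quad\Longrightarrow\quad 2a = 8 \quad\Longrightarrow\quad a = 4
```

Nothing about the ages changed; the problem was simply restated in a form where the answer is easy to read off. Interpretive analysis treats confusing philosophical statements the same way.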

And there’s a fourth way. Instead of breaking justice apart or tracing it back or translating it, you might try to see how it connects to everything else. You might ask: “How does justice relate to law? To morality? To happiness? To power?” The idea here is that nothing can be understood in isolation—you only really understand something when you see the whole web of connections it’s part of. This is connective analysis.

Here’s what’s interesting: these four methods aren’t rivals. Real philosophical analysis usually involves all of them at once. You translate a confusing statement into clearer language, then you break it into parts, then you trace those parts back to even more basic principles, and all the while you’re noticing how everything connects. The art is knowing which thread to pull when.


A Revolutionary Idea: Translating into Logic

Around the year 1900, something happened that changed philosophy forever. Gottlob Frege, a German mathematician, invented a brand-new system of logic, and Bertrand Russell, a British philosopher, developed it further. It was more powerful than any logic that had existed before. And they realized that if you translated ordinary statements into this new logical language, something remarkable happened: philosophical problems that had seemed impossible just dissolved.

Here’s a classic example. Consider the statement: “Unicorns do not exist.” If you take this sentence at face value, it seems to say that there’s a thing called “unicorns” that has the property of “not existing.” But that’s weird. How can a thing have the property of not existing? If it doesn’t exist, what are we even talking about? The Austrian philosopher Alexius Meinong famously argued that non-existent things like unicorns must still have some kind of being, since otherwise the sentence makes no sense. That led to all sorts of strange theories about a shadowy realm of non-existent objects.

Frege and Russell offered a different approach. Instead of taking the sentence at face value, they translated it into their new logical language. In that language, “Unicorns do not exist” becomes something like: “The concept ‘unicorn’ has no instances.” There’s no mysterious non-existent object involved. You’re just saying that nothing in the world happens to match the description “unicorn.” The problem was never really about unicorns—it was about being tricked by the grammar of ordinary language into thinking that every noun must name something real.
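In modern logical notation (a rough sketch, where U(x) abbreviates “x is a unicorn”), the two readings look like this:

```latex
\text{Misleading surface reading: } \neg E(\text{unicorns})
  \quad \text{(treats existence as a property of a thing)}

\text{Frege--Russell reading: } \neg \exists x\, U(x)
  \quad \text{(``there is no } x \text{ such that } x \text{ is a unicorn'')}
```

The second version never mentions a unicorn-object at all; it only says that the concept comes up empty.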

This was the birth of what became known as analytic philosophy. And the method wasn’t just breaking ideas into parts, the way Kant had thought analysis worked. It was more like what Descartes did when he invented analytic geometry: he translated geometrical problems into algebra, solved them there, then translated the solution back. Frege and Russell were doing philosophical translation: taking confusing statements, rendering them into the precise language of logic, and watching the confusion disappear.


When Language Leads You Astray

A philosopher named Gilbert Ryle pushed this idea further. In a famous paper from 1932, he pointed out that ordinary language is full of what he called “systematically misleading expressions.” These are sentences that look like they’re about one kind of thing but are actually about something else entirely.

Consider: “Unpunctuality is reprehensible.” This sounds like it’s talking about a thing called “unpunctuality” that deserves blame. But that can’t be right—you can’t blame a concept. The real meaning is something like: “Whoever is unpunctual should be reproved for it.” There’s no mysterious entity involved.

Or consider: “Jones hates the thought of going to hospital.” This makes it sound like there’s a thing—“the thought”—that Jones hates. But there isn’t. Jones just feels distressed when he thinks about what will happen at the hospital.

The point is that language can trick us into believing in things that don’t exist. Philosophical analysis, on this view, is the art of detecting the trick and rephrasing the sentence so the trick disappears. You don’t need to argue about whether “unpunctuality” is real or what kind of thing “the thought” is. You just rewrite the sentence in a way that doesn’t suggest those things exist in the first place.


The Paradox of Analysis

But here’s a puzzle that has bothered philosophers for a long time, and it gets to the heart of what analysis is supposed to do.

If you analyse a concept correctly—say, you define “knowledge” as “justified true belief”—two things seem to be true. First, your analysis has to be correct: the definition has to capture exactly what knowledge is. But second, your analysis has to be informative: it has to tell you something you didn’t already know.

The problem is that these two requirements seem to clash. If the analysis is correct, then the concept being analysed and the analysis itself must mean the same thing. But if they mean the same thing, then the analysis is just saying the same thing twice—it’s not telling you anything new. How can analysis be both true and informative?

This is called the paradox of analysis, and philosophers still argue about it. It’s connected to a puzzle from Plato’s dialogue Meno, where Socrates asks: how can you search for knowledge of something when you don’t already know what it is? If you know what you’re looking for, you don’t need to search. If you don’t know, you won’t recognize it when you find it.

Some philosophers think this shows that analysis can’t really give us new knowledge—it can only make explicit what we already implicitly know. Others think it means we should give up on the idea of perfect, complete analyses and aim for something more like clarification or illumination. Nobody really agrees.


A Global Web of Analysis

One thing that’s easy to miss is that analysis isn’t just a Western thing. In India, philosophers in the Nyāya school developed incredibly sophisticated analytic techniques starting around 2000 years ago. Their methods were based not on geometry (as in Greece) but on Sanskrit grammar. They analysed the structure of inferences, the nature of perception, and the conditions under which knowledge claims are valid. By the 14th century, they had developed what scholars now call the “new Nyāya” school—Navya-Nyāya—which rivals anything in Western analytic philosophy for technical precision.

In China, Buddhist philosophers like Fazang (7th century) used the metaphor of a golden lion statue to explain how a whole and its parts relate to each other. They argued that understanding anything requires understanding its connections to everything else—a version of what we called connective analysis. Later, Confucian philosophers like Dai Zhen (18th century) developed what they called the “principle of analysis” (fēnlǐ), which involved carefully dividing things into their components to understand the larger pattern.

What this shows is that the urge to untangle—to take ideas apart and see how they work—isn’t a peculiarity of European philosophy. It’s something humans do whenever they think seriously about thinking.


So What Is Analysis, Really?

After all this, you might be hoping for a neat definition. There isn’t one. Analysis is more like a family of related activities: breaking things down, tracing them back, translating them into clearer language, and connecting them to other things. Different philosophers pull on different threads.

But one thing seems clear. Analysis isn’t just destroying something by taking it apart. When Penelope unravelled her shroud each night, she wasn’t destroying it—she was maintaining her freedom and buying time. When a chemist analyses a compound, they don’t throw away the pieces—they learn what the compound is made of. When a philosopher analyses a concept, they’re not trying to get rid of it. They’re trying to understand it better, so they can think with it more clearly.

The point of analysis is not to reduce everything to its simplest bits and call it a day. It’s to see the structure clearly enough that you know how to put it back together.


Key Terms

  • Analysis: The general activity of trying to understand something by untangling, breaking down, tracing back, translating, or connecting it.
  • Decompositional analysis: Breaking a concept or thing into its component parts.
  • Regressive analysis: Working backward from what you want to understand to the more basic principles it depends on.
  • Interpretive analysis: Translating a statement into a clearer or more precise language to reveal its real meaning.
  • Connective analysis: Understanding something by seeing how it relates to other things in a larger network.
  • Logical analysis: Translating ordinary statements into a formal logical system to reveal their true structure.
  • Paradox of analysis: The puzzle that if an analysis is correct it seems to be trivial, but if it’s informative it seems not to be correct.
  • Systematically misleading expression: A sentence whose grammatical form tricks you into thinking it’s about one kind of thing when it’s really about another.

Key People

  • Gottlob Frege (1848–1925): German mathematician and philosopher who invented modern predicate logic and used it to analyse number statements, showing that “exists” isn’t a property of things but of concepts.
  • Bertrand Russell (1872–1970): British philosopher who developed the theory of descriptions, which showed how to analyse statements about things that don’t exist without having to claim they somehow do.
  • Gilbert Ryle (1900–1976): British philosopher who argued that many philosophical problems come from being misled by ordinary language, and that analysis involves rephrasing those misleading statements.
  • Fazang (643–712): Chinese Buddhist philosopher who used the metaphor of a golden lion to explain how wholes and parts are interdependent, developing a connective form of analysis.
  • Dai Zhen (1724–1777): Chinese Confucian philosopher who developed a “principle of analysis” that involved dividing things into parts to understand the larger pattern they form.

Things to Think About

  1. Think of a concept you’ve argued about with friends—like “fairness” or “friendship.” If you tried to analyse it, which method would you use: breaking it into parts, tracing it back to principles, translating it into clearer language, or connecting it to other ideas? Does one method seem more promising than the others?

  2. The paradox of analysis says that if an analysis is correct it seems trivial, and if it’s informative it seems wrong. Can you think of an example of something that felt like genuine discovery even though it was just making explicit what you already knew? Does that count as “new knowledge”?

  3. When Ryle says “Unpunctuality is reprehensible” is misleading, he’s claiming that ordinary language tricks us. But who decides what the “real” meaning is? Couldn’t someone argue that “unpunctuality” is a perfectly fine way to talk? How would you settle the disagreement?

  4. The Indian and Chinese traditions developed analytic methods independently of the Greek-European tradition. Does that suggest that analysis is something natural to human thinking—or that it’s a specific technique that happened to be invented in multiple places? What would count as evidence either way?


Where This Shows Up

  • Computer science: When programmers “parse” a sentence or “decompose” a problem into functions, they’re doing analysis. The logic invented by Frege and Russell is the basis for how programming languages and databases are designed.
  • Science: Chemists analyse compounds into elements. Biologists analyse organisms into cells. Physicists analyse forces into fundamental interactions. The word “analysis” in science still carries the basic meaning of “breaking down to understand.”
  • Law: When lawyers argue about what a law “really means,” they’re doing interpretive analysis—translating ordinary language into precise legal terms. Judges analyse concepts like “intent” or “negligence” by breaking them into conditions.
  • Everyday arguments: When someone says “That’s not fair!” and you ask “What do you mean by fair?” you’re doing conceptual analysis. You’re trying to get clear on what the claim actually amounts to before you argue about whether it’s true.
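The Frege–Russell move, treating “exists” as a property of concepts rather than of things, can even be sketched in a few lines of code. (The names below are invented for illustration, not part of any real library.)

```python
# Sketch of the Frege-Russell idea: a "concept" is modelled as a yes/no test,
# and a "world" as a plain list of things. "Exists" then means: the concept
# has at least one instance in the world.

def has_instances(concept, world):
    """True if at least one thing in the world falls under the concept."""
    return any(concept(thing) for thing in world)

def is_unicorn(thing):
    return thing == "unicorn"

def is_horse(thing):
    return thing == "horse"

world = ["horse", "dog", "stone"]

# "Unicorns do not exist" becomes: the concept 'unicorn' has no instances.
print(has_instances(is_unicorn, world))  # False
print(has_instances(is_horse, world))    # True
```

Notice that no unicorn-object appears anywhere in the program; the sentence is handled entirely by checking whether a description matches anything.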