How to Build a Scientific Community (With Computers)
Imagine you’re the captain of a ship, and your crew has to figure out which route across the ocean is safest. Half of them think the northern passage is better; half favor the southern one. You can’t let them argue forever, but you also don’t want them to all jump on the first idea that sounds good and miss a better one. How do you organize them so they find the truth as quickly and reliably as possible?
This isn’t just a puzzle about sailors. It’s a puzzle about science itself. Scientists working together face the same problem: they need to share information, but sharing too quickly can make everyone follow a wrong lead. They need to explore different ideas, but too much exploration means nobody digs deep enough. How should a scientific community be structured so it actually discovers the truth?
Philosophers have been asking this question for a long time. But recently, they’ve started using a strange tool to help answer it: they build tiny artificial scientific communities inside computers, let them run, and watch what happens. These are called agent-based models (ABMs for short). Each “agent” is a stand-in for a scientist, programmed with rules for how they think, communicate, and decide what to work on. Then the philosophers sit back and see which kinds of communities succeed.
It sounds a bit like a video game. And in a way, it is. But the results have been genuinely surprising.
The Basic Idea: Growing a Science World
Here’s the core thought, from a social scientist named Joshua Epstein: “If you didn’t grow it, you didn’t explain its emergence.” He meant that if you want to understand how something complicated (like a scientific consensus) arises from lots of simple interactions (like scientists talking to each other), you should try to build it from scratch inside a computer. Watch it emerge. If you can’t get the computer to do it, you probably don’t really understand how it happens.
So philosophers build these little worlds. They decide what the scientists care about (fame? truth? both?). They decide who talks to whom (everyone? just their friends?). They decide how evidence works (do experiments give clear answers, or noisy ones?). Then they hit “run” and watch.
What they’ve found has changed how philosophers think about scientific communities.
The Problem of Too Much Sharing
One of the earliest and most famous results came from a philosopher named Kevin Zollman. He wanted to know whether scientists should be tightly connected — sharing everything with everyone — or more loosely connected.
In his model, scientists are trying to figure out which of two treatments for a disease actually works. One treatment is better, but nobody knows which. Each scientist runs experiments, learns from their own results, and also hears about results from the scientists they’re connected to. Zollman tried different network shapes: a cycle (each person talks only to their two neighbors), a wheel (a cycle with one extra person in the middle who talks to everyone), and a complete graph (everyone talks to everyone).
What he found was counterintuitive. You might expect that more sharing is always better — after all, science is supposed to be open. But in Zollman’s model, tightly connected communities sometimes did worse. Here’s why: if early experiments by chance made the worse treatment look good, that misleading evidence spread like wildfire through the whole community. Everyone jumped on the wrong bandwagon, and nobody stuck around to explore the other option. The loosely connected communities, by contrast, kept multiple ideas alive longer, which gave the better treatment a chance to prove itself.
Zollman called this the need for transient diversity — a temporary period where different scientists pursue different ideas, even though eventually the community should converge on the best one. Too much sharing kills this diversity too early.
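If you want to watch this happen, here is a minimal Python sketch of a Zollman-style setup. (Philosophers call this kind of thing a bandit model, because choosing between treatments with unknown payoffs is like a gambler choosing between slot machines.) The success rates, the number of agents, the priors, and the crude frequency-counting below are made-up illustration values, not Zollman’s actual parameters, so treat it as a toy to poke at rather than a replication.

```python
import random

# A minimal, illustrative sketch of a Zollman-style bandit model.
# The success rates, number of agents, priors, and round count are
# made-up illustration values, not Zollman's published parameters.

N_AGENTS = 10
P_GOOD, P_BAD = 0.55, 0.45        # true success rates of the two treatments
ROUNDS = 1000

def make_network(kind):
    """Return, for each agent, the set of agents whose results it gets to see."""
    if kind == "cycle":           # talk only to your two neighbors (and yourself)
        return [{i, (i - 1) % N_AGENTS, (i + 1) % N_AGENTS} for i in range(N_AGENTS)]
    if kind == "complete":        # everyone talks to everyone
        return [set(range(N_AGENTS)) for _ in range(N_AGENTS)]
    raise ValueError(kind)

def run(kind):
    # Each agent tracks (successes, trials) for both treatments: a crude
    # stand-in for the Bayesian updating in the original model.
    stats = [[[1, 2], [1, 2]] for _ in range(N_AGENTS)]     # mildly optimistic priors
    network = make_network(kind)
    for _ in range(ROUNDS):
        results = []
        for agent in range(N_AGENTS):
            est = [s / t for s, t in stats[agent]]
            if est[0] == est[1]:
                choice = random.randint(0, 1)                # break ties at random
            else:
                choice = 0 if est[0] > est[1] else 1         # pick what currently looks better
            success = random.random() < (P_GOOD if choice == 0 else P_BAD)
            results.append((agent, choice, success))
        # Each agent updates on its own result and on its neighbors' results.
        # No one pools beliefs directly; they only share raw experimental outcomes.
        for agent in range(N_AGENTS):
            for who, choice, success in results:
                if who in network[agent]:
                    stats[agent][choice][0] += success
                    stats[agent][choice][1] += 1
    # How much of the community ends up rating the genuinely better treatment higher?
    correct = sum(1 for s in stats if s[0][0] / s[0][1] >= s[1][0] / s[1][1])
    return correct / N_AGENTS

for kind in ("cycle", "complete"):
    share = sum(run(kind) for _ in range(20)) / 20
    print(f"{kind:>8}: {share:.0%} of agents favour the better treatment on average")
```

Because the two treatments are deliberately close (0.55 versus 0.45), an unlucky early streak can tip the fully connected community onto the worse treatment, while the sparse cycle keeps a few holdouts experimenting. Widen the gap between the treatments and the difference between the networks tends to wash out.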
But later work showed this effect is fragile. It only really matters when the evidence is weak or noisy, when the community is small, and when the difference between the good and bad option is small. When evidence is clear, tightly connected communities find the truth faster. So the answer depends on context: sometimes you need more isolation, sometimes more connection.
Mavericks, Followers, and the Division of Labor
Another puzzle: what kinds of scientists should a community have? Should everyone be a careful, cautious researcher who builds on what others have done? Or should there be wild risk-takers who explore strange ideas no one else is looking at?
Philosophers Michael Weisberg and Ryan Muldoon built a model to study this. They imagined science as a landscape, with hills representing important discoveries and valleys representing dead ends. Scientists wander around this landscape trying to find the highest peaks. The model had three kinds of scientists:
- Controls who just try to find higher ground wherever they are, ignoring what others are doing.
- Followers who look at what their neighbors have already explored and go to the best spots.
- Mavericks who try to find entirely new, unexplored territory.
What they found was that a mix of followers and mavericks worked best — but only under certain conditions. Mavericks are costly (they waste time on dead ends sometimes), but they also discover things followers would never find. A community of only followers would all cluster on the same small hill and miss the big mountain. A community of only mavericks would never dig deep enough to really understand anything. But mix them together, and you get a community that both explores broadly and exploits what it finds.
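To make the landscape idea concrete, here is a toy Python version, loosely in the Weisberg and Muldoon spirit. The grid, the two hand-placed hills, the movement rules, and the assumption that a patch only pays out its full significance after several visits (“digging deep”) are all illustrative choices of mine, not their actual model, and their third type, the controls, is left out entirely. Which mix comes out on top turns out to be quite sensitive to those made-up details; the point is to watch the explore-and-exploit division of labor in action, not to read a verdict off the exact numbers.

```python
import random
from collections import Counter

# A toy epistemic-landscape model, loosely in the spirit of Weisberg and Muldoon.
# The grid, the two hand-placed hills, the movement rules, and the "digging deep"
# payout rule are illustrative simplifications, not their actual model.

SIZE, DIG = 50, 5                 # grid is SIZE x SIZE; DIG visits fully exploit a patch

def significance(cell):
    """Two hills of significance on an otherwise flat plain (a made-up landscape)."""
    x, y = cell
    hill = lambda cx, cy, h: max(0.0, h - 0.3 * (abs(x - cx) + abs(y - cy)))
    return max(hill(10, 10, 3.0), hill(40, 35, 5.0))

def neighbours(x, y):
    cells = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    return [(cx, cy) for cx, cy in cells if 0 <= cx < SIZE and 0 <= cy < SIZE]

def toward(a, b):
    """Move coordinate a one step toward coordinate b."""
    return a + (b > a) - (b < a)

def run(n_followers, n_mavericks, steps=300):
    kinds = ["follower"] * n_followers + ["maverick"] * n_mavericks
    positions = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in kinds]
    visits = Counter(positions)                      # how often each patch has been worked on
    for _ in range(steps):
        best = max(visits, key=significance)         # best result anyone has found so far
        for i, (x, y) in enumerate(positions):
            if kinds[i] == "follower":
                # Followers head toward the best spot the community has already found.
                positions[i] = (toward(x, best[0]), toward(y, best[1]))
            else:
                # Mavericks strike out for patches nobody has visited yet.
                fresh = [c for c in neighbours(x, y) if c not in visits]
                positions[i] = random.choice(fresh or neighbours(x, y))
            visits[positions[i]] += 1
    # Progress: each patch pays out its significance only once it has DIG visits.
    return sum(min(n, DIG) / DIG * significance(p) for p, n in visits.items())

for followers, mavericks in [(20, 0), (0, 20), (15, 5)]:
    avg = sum(run(followers, mavericks) for _ in range(5)) / 5
    print(f"{followers:>2} followers + {mavericks:>2} mavericks: progress ≈ {avg:.1f}")
```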
This connects to a famous earlier idea from Philip Kitcher, who wondered whether selfish motives might actually help science. His thought: if scientists only care about finding the truth, they’ll all work on the most promising idea — and neglect alternatives. But if they also care about fame and credit (non-epistemic incentives), some will take risks on less popular ideas, hoping to be the first to make a big discovery. In that case, selfishness helps the community.
Later agent-based models complicated this picture. They showed that the “invisible hand” only works when scientists have good information about what others are doing — and when they’re not too biased about who gets credit. In real science, where credit often goes to well-connected people rather than first discoverers, the system breaks down.
How Minorities Can Get Squeezed
One of the most troubling findings from these models is about how inequality can emerge in science, even without anyone being deliberately unfair.
Imagine two scientists are collaborating on a project. They have to decide how to split the work. One might end up doing more than their fair share. Over many such interactions, patterns can emerge where members of a minority group consistently end up with the worse end of the deal — not because anyone is prejudiced, but simply because they’re in the minority.
Philosophers Cailin O’Connor and Justin Bruner built a model of this using something called the Nash demand game. Two agents bargain over how to split a resource. If their demands are compatible, both get what they asked for. If their demands together exceed the resource, nobody gets anything. Over time, agents learn from their interactions.
The surprising result: even when nobody is biased, minority group members can end up systematically disadvantaged. Why? Because if you’re in a minority, you interact more often with majority members than with people like you. Majority members, by contrast, interact mostly with other majority members. So majority members can afford to hold out for better deals — they have lots of similar partners to bargain with. Minority members can’t. They have to accept worse terms or risk getting nothing.
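Here is a stripped-down Python version of that bargaining setup, loosely in the spirit of the O’Connor and Bruner models. The group sizes, the three possible demands, and the simple “reinforce whatever worked” learning rule are illustrative stand-ins, not their exact specification. The agents can tell which group a partner belongs to, but nobody starts out with any bias against anybody.

```python
import random

# A stripped-down Nash demand game with a majority and a minority group,
# loosely in the spirit of O'Connor and Bruner's bargaining models. The group
# sizes, the three demand levels, and the simple reinforcement-learning rule
# are illustrative choices, not their exact specification.

DEMANDS = [4, 5, 6]               # possible demands over a resource worth 10
RESOURCE = 10
GROUPS = ["majority"] * 90 + ["minority"] * 10
ROUNDS = 100_000

def run(seed):
    random.seed(seed)
    # Each agent keeps reinforcement weights over the three demands, separately
    # for in-group ("same") and out-group ("other") partners. No agent starts
    # with any bias: all weights begin equal.
    agents = [{"group": g, "same": [1.0, 1.0, 1.0], "other": [1.0, 1.0, 1.0]} for g in GROUPS]

    def pick(agent, tag):
        return random.choices(range(len(DEMANDS)), weights=agent[tag])[0]

    for _ in range(ROUNDS):
        a, b = random.sample(agents, 2)        # random pairing: minority members mostly meet the majority
        tag = "same" if a["group"] == b["group"] else "other"
        i, j = pick(a, tag), pick(b, tag)
        if DEMANDS[i] + DEMANDS[j] <= RESOURCE:          # compatible demands: both get paid
            a[tag][i] += DEMANDS[i]                      # reinforce whatever just worked
            b[tag][j] += DEMANDS[j]

    def usual_demand(group):
        """The most-reinforced demand that members of `group` make against the other group."""
        totals = [sum(agent["other"][k] for agent in agents if agent["group"] == group)
                  for k in range(len(DEMANDS))]
        return DEMANDS[totals.index(max(totals))]

    return usual_demand("majority"), usual_demand("minority")

for seed in range(5):
    maj, mino = run(seed)
    print(f"run {seed}: majority usually demands {maj} of the minority, minority demands {mino} back")
```

Outcomes vary from run to run, but uneven bargaining conventions between the groups emerge regularly, and the conditioning on group membership is the key ingredient: take it away, so that agents use a single set of weights for every partner, and group membership has nothing to latch onto.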
This helps explain real patterns in science, like why women and racial minorities sometimes end up with heavier teaching loads, less credit on publications, or clustering in less prestigious subfields — not because anyone is actively discriminating, but because the dynamics of collaboration themselves produce inequality.
What Should You Do When You Disagree?
One more puzzle these models address: when two scientists disagree, how should each respond? Should they “split the difference” and meet in the middle? Or should they stick to their guns?
In the philosophical debate, these are called Conciliatory Norms and Steadfast Norms. The agent-based models show that neither is always better. When evidence is clear and reliable, conciliatory communities find the truth faster. But when evidence is noisy or misleading, steadfast communities avoid being led astray by bad data. There’s a trade-off between speed and accuracy.
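As update rules, the two norms are almost embarrassingly simple to write down. Here is a minimal sketch; the 0-to-1 credence scale and the half-and-half compromise weight are just illustrative choices.

```python
# A tiny illustration of the two norms as belief-update rules. The 0-to-1
# credence scale and the 50/50 compromise weight are illustrative choices.

def conciliatory_update(my_credence: float, peer_credence: float, weight: float = 0.5) -> float:
    """Split the difference: move partway toward the peer who disagrees with you."""
    return (1 - weight) * my_credence + weight * peer_credence

def steadfast_update(my_credence: float, peer_credence: float) -> float:
    """Stick to your guns: the mere fact that a peer disagrees changes nothing."""
    return my_credence

# Two scientists disagree about how likely a hypothesis is:
print(conciliatory_update(0.9, 0.3))   # 0.6, they meet in the middle
print(steadfast_update(0.9, 0.3))      # 0.9, unmoved by the disagreement
```

The interesting questions only start once you drop many such agents into a noisy environment and let them update over and over, which is exactly what the models described here do.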
The right answer depends on how hard the problem is, how good the evidence is, and what you care about most. This is frustrating for anyone who wants a simple rule, but it’s probably closer to the truth about real science.
What Can We Actually Learn From Toy Worlds?
At this point, you might be thinking: “Okay, but these are just little computer games. Scientists in real life are way more complicated. What can we really learn from these toy models?”
That’s a good objection, and philosophers have argued about it. The defenders of agent-based modeling give several answers.
First, these models are like thought experiments. They show that certain things are possible — that inequality can emerge without prejudice, that sharing can hurt as well as help — in a way that a mere argument can’t. They make the logic visible and concrete.
Second, they help us understand mechanisms. Even if the model is highly simplified, it isolates a mechanism (like “the minority effect” in bargaining) that we can then look for in the real world.
Third, they can generate hypotheses that guide further research. If a model suggests that certain network structures are bad for science, we can go look at real scientific networks and see if the pattern holds.
But critics point out that these models often make unrealistic assumptions. Real scientists aren’t perfectly rational. Real evidence isn’t a clean signal. And real communities have history, politics, and emotion that no simple model captures. So what we learn from models needs to be checked against reality.
The best approach, many philosophers think, is to build families of models — many different models of the same phenomenon, using different assumptions. If they all point to the same conclusion, we can be more confident. If they disagree, we learn that the answer depends on details we don’t fully understand yet.
So What’s the Takeaway?
If you’re trying to organize a community that finds truth — whether it’s scientists, a classroom, or a team of explorers — here’s what the agent-based models suggest:
- Don’t share everything too quickly. Some isolation helps preserve diversity of ideas.
- Mix types of thinkers. You need both explorers and exploiters.
- Watch out for invisible inequality. Fair-seeming systems can produce unfair outcomes.
- There’s no one right answer. The best structure depends on the problem.
None of this is proven — philosophers still argue about how much weight to give these models. But they’ve changed the conversation. Instead of just arguing about how science should work, philosophers can now build a science world inside a computer and watch what happens. That’s a pretty strange and powerful thing.
Key Terms
| Term | What it does in this debate |
|---|---|
| Agent-based model | A computer simulation where individual “agents” follow rules, letting you watch what happens when lots of them interact. |
| Transient diversity | A temporary period where different scientists pursue different ideas, necessary for finding the best one. |
| Epistemic landscape | A way of visualizing science as a terrain with hills (good ideas) and valleys (bad ideas). |
| Bandit model | A model where scientists choose between options with unknown payoffs, like a gambler choosing slot machines. |
| Nash demand game | A simple game where two people bargain over how to split something, used to study how inequality emerges. |
| Conciliatory vs. Steadfast Norm | Two responses to disagreement: change your view toward the other person’s, or stick with what you believe. |
| Network epistemology | The study of how who talks to whom affects what a community learns. |
Key People
- Kevin Zollman — A philosopher who showed that too much sharing among scientists can actually hurt their ability to find the truth (the “Zollman effect”).
- Michael Weisberg and Ryan Muldoon — Philosophers who built an “epistemic landscape” model showing how mixing different kinds of scientists can improve discovery.
- Philip Kitcher — A philosopher who argued that scientists caring about fame (not just truth) might accidentally help science by exploring unpopular ideas.
- Cailin O’Connor and Justin Bruner — Philosophers who used game theory models to show how minorities can become disadvantaged in science even without anyone being deliberately unfair.
Things to Think About
- The models suggest that sometimes less communication is better for a community. Can you think of a real situation where too much sharing caused everyone to make the same mistake?
- If you were designing a scientific community from scratch — deciding who talks to whom, how credit is given, what kinds of researchers are hired — what would you do differently based on these models?
- These models treat scientists as pretty simple: they update beliefs, talk to neighbors, pursue credit. But real scientists are also emotional, political, and influenced by things like ego and friendship. Do you think adding those complications would change the results, or just add noise?
- One model showed that minority groups can get worse deals even without prejudice. If you were a policymaker trying to make science fairer, what would you need to know before using that result to guide real changes?
Where This Shows Up
- Online misinformation: The Zollman effect, where everyone in a tightly connected network converges on a bad idea before better ones get a fair hearing, is basically what happens when fake news goes viral.
- Team projects in school: Should everyone work together on everything, or should groups work separately and then share results? The models suggest the answer depends on how hard the problem is.
- Diversity initiatives in science and tech: The bargaining models explain why simply “not being racist” or “not being sexist” isn’t enough — fair systems can still produce unfair outcomes.
- Search algorithms: Programs that explore the internet or design new materials face the same exploration/exploitation trade-off as the scientists in these models.