The Adjacent Possible: How Everything Evolves
Foundations #1
Imagine a house that builds itself as you walk through it. You start in a room with four doors. Open one, and three new rooms appear beyond it, rooms that didn’t exist moments earlier. Keep walking, opening doors, and eventually you’ve built a palace. But here’s the constraint: you cannot skip ahead. You must walk through each room to reach the rooms beyond.
This is the adjacent possible, and it’s one of the most powerful ideas to emerge from complexity science. Stuart Kauffman introduced the concept in the late 1990s while wrestling with a problem that should bother anyone thinking seriously about evolution, technology, or innovation: how do genuinely new things come into existence when the space of what’s possible keeps expanding as we explore it?
The naive picture of evolution imagines searching through a fixed space of possibilities. Think of all possible protein configurations as points in some vast landscape, and evolution as a blind search algorithm trying to find the good ones. But Kauffman realized this picture is wrong in a fundamental way. The space itself isn’t fixed. It grows as evolution proceeds. Each innovation creates possibilities that didn’t exist before, couldn’t exist before, because their prerequisites hadn’t been assembled yet.
Call this the adjacent possible: the set of possibilities available one step away from the current configuration. It’s everything you can reach from where you are now, but nothing further. And here’s what makes it interesting for anyone thinking about innovation, whether biological or technological. We cannot prestate what the adjacent possible contains. We discover it by moving into it.
In human domains this doesn’t mean planning is impossible. We can plan tolerably well in the near-term adjacent possible, those doors we can actually see from here. But planning precision degrades rapidly with distance. Three moves ahead, we face combinatorial explosion. Five moves ahead, we encounter unknown unknowns, possibilities that don’t exist as possibilities yet. The palace extends as we build it, and we cannot draw blueprints for rooms that won’t exist until we’ve walked through the rooms before them.
Why evolution must explore locally
Kauffman’s insight emerged from studying autocatalytic sets, collections of molecules where each molecule catalyzes the formation of others in the set. These systems can spontaneously organize into self-sustaining metabolic networks, a candidate mechanism for the origin of life. But analyzing them forced Kauffman to confront the search space problem.

The numbers are staggering. Consider possible proteins. A protein of just 100 amino acids has 20^100 possible configurations, that’s about 10^130 possibilities. For context, there are roughly 10^80 hydrogen atoms in the observable universe; the number of possible 100-amino-acid proteins exceeds even that by a factor of 10^50. Evolution has had maybe 10^18 seconds since life began. There literally isn’t enough time since the Big Bang to synthesize even a tiny fraction of possible proteins, much less to search that space systematically.

The space is too large for exhaustive search. Period. Evolution must explore locally, moving from configurations that actually exist to nearby configurations that become accessible from there. This isn’t a choice; it’s a constraint. The only tractable strategy is to explore the neighborhood of what you’ve already built.

But local exploration has a profound consequence. When ancestral lungfish developed air bladders (proto-lungs), this created the adjacent possible for land-breathing vertebrates. You couldn’t have pointed to “land-breathing vertebrate” in the space of biological possibilities before the air bladder existed. The concept itself depends on having the prerequisite structures already assembled. The possibility space constructs itself as evolution explores it, with each innovation opening doors to further innovations that didn’t exist as possibilities moments earlier. This creates what Kauffman calls the problem of non-ergodicity.
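The arithmetic above is easy to check directly. A minimal sketch (the round figures for atom counts and elapsed time are the rough estimates from the text, not measurements):

```python
import math

# Sequence space for a 100-amino-acid protein: 20 choices per position.
protein_space = 20 ** 100
print(round(math.log10(protein_space), 1))   # about 10^130

hydrogen_atoms = 10 ** 80    # rough count in the observable universe
seconds_of_life = 10 ** 18   # rough time since life began on Earth

# Even sampling one new protein per hydrogen atom per second for all of
# life's history barely scratches the space:
sampled = hydrogen_atoms * seconds_of_life   # 10^98 sequences
shortfall = math.log10(protein_space) - math.log10(sampled)
print(round(shortfall), "orders of magnitude short")
```

Exhaustive search loses by a factor of 10^32 even under absurdly generous assumptions, which is the whole case for local exploration.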
In statistical mechanics, we typically assume we can prestate the phase space, all possible positions and momenta for gas molecules, say, and then let the system evolve through that fixed space according to known laws. The biosphere doesn’t work this way. The phase space itself expands as the system evolves. We cannot list all possible Darwinian preadaptations ahead of time because the relevant macroscopic variables (wings, eyes, lungs, swim bladders) don’t exist as possibilities until their precursors have been assembled.
Kauffman calls this enablement rather than causation. One innovation doesn’t cause the next in the deterministic sense of physical law. It enables the next by expanding what becomes adjacent. The swim bladder enabled bacteria adapted to live in swim bladders. Those bacteria enabled immune responses specific to swim bladder infections. Each step opens doors that weren’t there before.
Pruning rules select which adjacent possibles get explored
But here’s the crucial point: being in the adjacent possible doesn’t mean being viable. Most adjacent configurations never get explored because they get pruned. Harold Morowitz, in his excellent book The Emergence of Everything: How the World Became Complex, emphasizes that pruning rules determine which parts of the adjacent possible actually get traversed. These rules are probabilistic, not deterministic, and they vary across domains.
In Darwinian evolution, the pruning rule is brutally simple: did it reproduce? A mutation can be one step away in sequence space and still never appear because the right mutation never happens, or happens but gets unlucky. The organism carrying that mutation gets eaten by a predator, or fails to find a mate, or dies in a drought. Adjacency doesn’t guarantee exploration. Luck and contingency matter enormously.
The swim bladder example shows both enablement and pruning at work. Air bladders enabled land-breathing vertebrates, but that doesn’t mean every fish lineage with air bladders made it onto land. Most got pruned. The ones that survived faced a gauntlet of selection pressures, environmental contingencies, competitive exclusion. What we see in the fossil record represents a tiny fraction of what was adjacent, the subset that made it through the probabilistic filter.
For technological evolution, pruning rules shift to economic and social dynamics. The first filter is adoption: will people use this? The second is profitability: can it attract enough resources for dissemination and further refinement? Brian Arthur’s work shows that technologies combine from existing components, but most combinations get pruned before they establish a foothold.
Consider Steve Jobs and the Macintosh. From a purely combinatorial standpoint, a non-QWERTY keyboard sat squarely in the adjacent possible. All the components existed. The technology was feasible. But QWERTY’s installed base created a selection pressure that made exploring that part of the adjacent possible economically suicidal. The pruning rule here was “will people adopt it?” and the answer was predictably no. Users already knew QWERTY. Retraining costs were high. Network effects favored compatibility. The non-QWERTY Mac would have been pruned by market forces regardless of its technical merits.
This reveals something important about path dependence. Early choices don’t just constrain what’s possible; they shape the pruning rules themselves. QWERTY’s dominance didn’t eliminate other keyboard layouts from the adjacent possible. It made them vastly more likely to be pruned, which amounts to nearly the same thing in practice.
Technology evolves through combinatorial exploration
Brian Arthur spent decades at the Santa Fe Institute working alongside Kauffman, and he recognized that technological evolution operates through the same basic dynamics of adjacency and enablement. Both biological and technological evolution work primarily through recombination of existing components. The difference is that technology makes this pattern more visible.
Open a jet engine and you’ll find compressors from industrial blowers, turbines from electrical generators, combustion systems refined through decades of internal combustion engine development. The jet engine didn’t descend from the internal combustion engine through gradual variation. It required different physical principles. But it combined existing technological components into a novel configuration that exploited those principles. Biology does the same thing; wings combine existing skeletal elements, feathers, muscles, and metabolic systems into novel configurations. Technology just makes the borrowed parts easier to see.
Arthur calls this combinatorial evolution. Most advances in both domains are innovation, clever recombination of what already exists. Occasionally you get genuine invention, truly new components. In technology: the microprocessor, originally built for a desktop calculator but soon adapted for vastly more. In biology: the metabolic pathways that made oxygen useful rather than poisonous, photosynthesis, the eukaryotic cell. These inventions then become components available for future innovations.
At any moment, the set of possible new technologies is determined by what components already exist and can be combined. YouTube succeeded rapidly in 2005 because broadband infrastructure, video codecs, Flash-based players in every browser, server farms, and payment systems already existed. The platform sat squarely in the technological adjacent possible. Try to build YouTube in 1995 and you fail, not because the idea is bad but because the prerequisites don’t exist yet.
Charles Babbage discovered this the hard way. His engine designs from the mid-1800s were sound: when the Science Museum built his Difference Engine No. 2 from the original plans in 1991, it worked. But Babbage couldn’t complete his machines in his own lifetime because the precision manufacturing and machining tolerances his designs demanded hadn’t been achieved yet. His inventions sat outside the adjacent possible of his era. The pruning rule, “can we actually fabricate this?”, pruned his designs decades before economic viability even became relevant.
Arthur and Wolfgang Polak demonstrated this through computer simulations. Starting with only the NAND gate (a logic primitive from which any circuit can be built), they used random mutations to generate new circuit combinations. A fitness function rewarded circuits that performed useful operations, and selection kept the ones that worked. The system successfully evolved circuits that could add binary numbers: first half adders, then full adders (circuits that add two bits plus a carry bit), then 4-bit adders, and eventually 8-bit adders of the kind that appeared in early Hewlett-Packard calculators.
The critical finding: you cannot jump directly from a NAND gate to an 8-bit adder. The complex circuits emerged only when the fitness function rewarded simpler functions as intermediate stepping stones. First you need circuits that add single bits. Then you can combine those into 4-bit adders. Only then can you build 8-bit adders. Each level requires the previous level to exist in the adjacent possible before the next level becomes accessible. Skip the intermediate steps and you get nothing.
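The Arthur–Polak setup can be sketched in miniature. This is a toy reconstruction, not their actual code: circuits here are lists of NAND gates wired to earlier wires, fitness is truth-table accuracy, and evolution is a random hill-climb that keeps non-worsening mutants.

```python
import itertools, random

def eval_circuit(gates, inputs):
    """Evaluate a feed-forward NAND circuit.
    gates: list of (a, b) wire indices; the inputs occupy wires 0..len(inputs)-1.
    Each gate appends one new wire; the last wire is the output."""
    wires = list(inputs)
    for a, b in gates:
        wires.append(1 - (wires[a] & wires[b]))  # NAND
    return wires[-1]

# Sanity check: XOR built from four NANDs, the classic construction.
xor_gates = [(0, 1), (0, 2), (1, 2), (3, 4)]
for x, y in itertools.product([0, 1], repeat=2):
    assert eval_circuit(xor_gates, (x, y)) == x ^ y

def fitness(gates, target):
    """Fraction of 2-input truth-table rows the circuit gets right."""
    rows = list(itertools.product([0, 1], repeat=2))
    return sum(eval_circuit(gates, r) == target(*r) for r in rows) / len(rows)

def evolve(target, n_gates=6, steps=3000, seed=0):
    """Hill-climb: rewire one random gate, keep the mutant if no worse."""
    rng = random.Random(seed)
    # Gate i may read any of the 2 inputs or the i earlier gate outputs.
    gates = [(rng.randrange(2 + i), rng.randrange(2 + i)) for i in range(n_gates)]
    best = fitness(gates, target)
    for _ in range(steps):
        mutant = list(gates)
        i = rng.randrange(n_gates)
        mutant[i] = (rng.randrange(2 + i), rng.randrange(2 + i))
        f = fitness(mutant, target)
        if f >= best:
            gates, best = mutant, f
        if best == 1.0:
            break
    return gates, best

gates, score = evolve(lambda x, y: x ^ y)
print("best fitness:", score)
```

The stepping-stone point shows up when you scale this up: a fitness function that only rewards perfect 8-bit addition gives the hill-climb no gradient to follow, while one that also rewards 1-bit and 4-bit addition creates the intermediate rungs.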
This explains why simultaneous discovery is so common in science and technology. When oxygen was discovered independently by Priestley and Scheele, when calculus was invented independently by Newton and Leibniz, when von Kleist and Cunaeus independently invented the Leyden jar, this wasn’t coincidence. It was structural inevitability. Multiple people working from the same knowledge base explore the same adjacent possible. The relevant innovations sit just beyond the current frontier, accessible to anyone positioned correctly. And the pruning rules (does it solve a recognized problem? can you demonstrate it? will others recognize its value?) operated similarly on all the competitors.
Cities accelerate by expanding the adjacent possible
Luis Bettencourt’s work on urban scaling laws reveals another domain where the adjacent possible matters: cities create innovation through collision. When you double a city’s population, you need only 85% more infrastructure (roads, gas stations, electrical cables) but you get roughly 115% more innovation, patents, wages, and economic output. This is one of the most robust patterns in urban science, holding across thousands of cities worldwide regardless of nation or culture.
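Bettencourt’s scaling relations take the power-law form Y = Y0 · N^β. A quick sketch of what the exponents imply per doubling, using the commonly reported round figures β ≈ 0.85 and β ≈ 1.15 rather than fitted data:

```python
# Urban scaling law: Y = Y0 * N**beta.
# beta ≈ 0.85 for infrastructure (sublinear), ≈ 1.15 for socioeconomic
# outputs (superlinear). Illustrative round exponents, not fitted values.
def scale(y0: float, n: float, beta: float) -> float:
    return y0 * n ** beta

for beta, label in [(0.85, "infrastructure"), (1.15, "innovation/output")]:
    doubling = scale(1, 2_000_000, beta) / scale(1, 1_000_000, beta)
    print(f"{label}: total multiplies by {doubling:.2f} per population doubling")
```

Sublinear exponents compound into large per-capita infrastructure savings as cities grow; superlinear exponents compound into the innovation explosion the scaling literature reports.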
Why? Because cities create what we might call collision spaces, dense networks where diverse elements combine. Each person, technology, institution, or idea in a city is potentially adjacent to vastly more recombination partners than in smaller settlements. The adjacent possible doesn’t just expand in cities; it explodes.
Consider Renaissance Florence. When Brunelleschi, Ghiberti, Donatello, and Masaccio worked blocks apart, sharing techniques and competing for commissions, they didn’t simply produce more art. They created adjacent possibles for new artistic techniques, patronage structures, architectural methods, and ultimately the entire conceptual apparatus of Renaissance humanism. Each innovation opened doors to combinations impossible a generation earlier.
But even here, pruning rules mattered. Florence’s particular economic structure (banking, wool trade, guild organization) and social structure (patronage networks, republican government, humanist education) created selection pressures that pruned certain artistic directions while favoring others. Not every adjacent artistic innovation got explored. Those that aligned with patrons’ tastes, guild requirements, and religious sensibilities were more likely to attract resources and survive.
The same dynamics explain Silicon Valley, medieval Baghdad, ancient Athens, Han Dynasty Chang’an. Dense urban networks don’t just concentrate existing elements. They accelerate the rate at which new elements become adjacent, accessible, combinable. But venture capital availability, regulatory environments, talent migration patterns, and market access shape which parts of that expanded adjacent possible actually get explored.
Consider how pruning rules shape outcomes even when starting conditions look similar. In 1980, Boston’s Route 128 corridor and Silicon Valley were roughly equal players in technology. Both had strong universities, venture capital, engineering talent, and dense professional networks. But California law prohibited non-compete agreements while Massachusetts enforced them. Engineers in Silicon Valley could leave one company and join or start another, carrying knowledge and relationships with them. In Boston, non-compete clauses locked talent in place. This seemingly minor legal difference created radically different pruning dynamics. Ideas that failed at one Valley company could be recombined at the next. Talent could flow to where adjacent possibilities looked most promising. Boston’s pruning rules were harsher; fewer recombinations survived to be explored. By 2000, Silicon Valley had left Route 128 in the dust. Same adjacent possible, different pruning rules, divergent outcomes.
What this tells us about planning and foresight
If even cities with similar resources and similar starting conditions diverge based on their pruning rules, then planning requires understanding both what’s adjacent and what gets selected. Which brings us to the question: how far ahead can we actually plan?
The adjacent possible clarifies why planning horizons degrade with distance. In the near term, maybe one or two moves ahead, we can plan with reasonable precision. We can see the doors from where we stand. We can estimate which combinations are likely to work, which pruning rules will apply, which paths look promising.
But precision degrades rapidly as we look further ahead. Three moves out, combinatorial explosion sets in. With N components available, potential combinations grow exponentially. Five moves out, we encounter unknown unknowns. The relevant possibilities don’t exist as possibilities yet. We’re trying to plan for rooms that won’t come into existence until we’ve walked through several rooms we can barely see.
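The explosion can be made concrete with a toy count. The model is deliberately crude (an assumption for illustration, not from the source): each move picks one pairwise combination of existing components, the chosen combination becomes a new component, and the number of distinct k-move plans is the product of the branching factors.

```python
from math import comb

def plan_count(n_components: int, moves: int) -> int:
    """Distinct k-move plans when each move combines one pair of the
    components currently available, and the result joins the toolkit."""
    plans, n = 1, n_components
    for _ in range(moves):
        plans *= comb(n, 2)  # choices available at this step
        n += 1               # the chosen combination becomes a component
    return plans

for k in (1, 3, 5):
    print(k, "moves ahead:", plan_count(20, k), "possible plans")
# 1 move: 190 plans; 3 moves: ~9 million; 5 moves: ~6 x 10^11.
```

With only 20 components, five moves ahead already exceeds half a trillion paths, and this count still ignores the unknown unknowns, the components that don't exist yet.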
This isn’t a failure of imagination or computational power. It’s structural. The configuration space constructs itself as we explore it. The doors don’t exist until we’re in position to open them. Babbage couldn’t plan for the personal computer revolution because “personal computer” wasn’t in anyone’s adjacent possible until semiconductors, transistors, integrated circuits, and mass manufacturing had been built. You cannot plan for what literally doesn’t exist as a possibility yet.
But we can plan at the edge. We can identify which components exist now. We can estimate which combinations are adjacent. We can reason about pruning rules, which innovations are likely to survive economic selection, social acceptance, technical feasibility testing. This is still valuable planning, just with honest acknowledgment of its limits.
The Apollo program shows planning at the edge working. Kennedy committed to the moon in 1961 when much of the required technology didn’t exist. But it was all in the adjacent possible or one step beyond. Computers, rockets, life support, navigation, materials science, the foundations existed. The missing pieces were adjacent to what NASA already had. The pruning rules were clear: does it work in testing? can we build it reliably? will it function in space? The planning worked because it stayed within or just beyond the visible adjacent possible.
Contrast that with fusion power, perpetually thirty years away. Some of the required technologies keep receding beyond the adjacent possible. Superconducting magnets weren’t adjacent in 1950 or 1970 or 1990. Plasma physics understanding wasn’t adequate. Materials that could withstand the neutron flux didn’t exist. Each advance opened new adjacent doors, but also revealed further requirements that were themselves beyond the then-current adjacent. Planning for fusion has been planning for rooms we still cannot see, and it shows.
Path dependence through pruning
Brian Arthur’s work on increasing returns showed how small early advantages compound through positive feedback. QWERTY keyboards, VHS tapes, Windows operating systems, these lock-ins aren’t accidents and they aren’t inevitable. But the adjacent possible framework, combined with pruning rules, explains how path dependence actually operates.
Early choices don’t determine outcomes; many paths remain open. But they do reshape the pruning landscape. QWERTY’s early adoption meant that alternatives faced much higher pruning pressure. The Dvorak keyboard was technically superior by multiple measures, but it had to overcome adoption costs, training costs, compatibility expectations. The pruning rule “will people adopt it?” became much harder to satisfy for alternatives. Path dependence works through differential pruning of the adjacent possible.
VHS versus Betamax shows the same pattern. Both were technologically feasible. Both sat in the same adjacent possible. But different decisions about licensing, recording time, and market positioning created different pruning pressures. VHS survived the economic pruning rules; Betamax didn’t. The technology was adjacent, but viability required making it through the gauntlet of market adoption, retailer support, content availability, network effects.
This has implications for anyone trying to introduce innovations. It’s not enough to have a technically feasible idea that sits in the current adjacent possible. You need to understand the pruning rules in your domain. For consumer technology: will people adopt it? Is the value proposition clear? Are switching costs reasonable? For scientific tools: does it solve a recognized problem? Can you demonstrate it? Will others be able to reproduce it? For business models: is it profitable enough to attract resources? Does it threaten powerful incumbents who can mobilize against it?
Brilliant ideas fail not just because they’re outside the adjacent possible, but because they underestimate the pruning rules that will select against them.
Why this matters for understanding complex systems
The adjacent possible does something unusual in complexity science. It provides a framework for understanding processes where we cannot specify the state space in advance. Traditional phase space approaches in physics assume we can list all relevant variables beforehand. For gas molecules, that’s positions and momenta. The math works because the variables are knowable.
But for the biosphere, technosphere, and cultural evolution, this assumption fails. The relevant variables themselves emerge during the process. Wings and semiconductor junctions and double-entry bookkeeping don’t exist as possibilities until their precursors have been assembled. The phase space constructs itself as we explore it.
Kauffman’s concept, enhanced by Morowitz’s emphasis on pruning rules, offers intellectual scaffolding for thinking about such systems without requiring omniscient specification of possibilities. We can identify principles governing exploration: local moves, combinatorial assembly, enablement rather than causation. We can identify pruning mechanisms: reproductive success in biology, economic viability in technology, social acceptance in cultural evolution. We can predict statistical patterns: Heaps’ law, Zipf’s law, correlation signatures. But we cannot predict specific innovations, because they don’t exist as possibilities yet.
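The Heaps’-law pattern mentioned above has a well-known generative model: the urn with triggering of Tria, Loreto, Servedio, and Strogatz, in which drawing a novelty injects brand-new elements into the urn, expanding the space as it is explored. A minimal sketch (parameters rho and nu are illustrative choices, not calibrated values):

```python
import random

def urn_with_triggering(draws: int, rho: int = 2, nu: int = 2, seed: int = 1):
    """Urn model of the adjacent possible: reinforcement makes seen
    elements recur; each first-time draw adds nu + 1 never-seen elements,
    so the space of possibilities grows as it is explored."""
    rng = random.Random(seed)
    urn = [0, 1]                 # element ids currently in the urn
    seen, next_id = set(), 2
    distinct_over_time = []
    for _ in range(draws):
        x = rng.choice(urn)
        urn.extend([x] * rho)    # reinforcement: the familiar recurs
        if x not in seen:        # novelty: a door opened for the first time...
            seen.add(x)
            urn.extend(range(next_id, next_id + nu + 1))  # ...reveals new doors
            next_id += nu + 1
        distinct_over_time.append(len(seen))
    return distinct_over_time

growth = urn_with_triggering(5000)
print(growth[-1], "distinct elements after 5000 draws")
```

The count of distinct elements grows sublinearly in the number of draws, the Heaps-like signature: a statistical regularity we can predict even though no specific novelty is predictable.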
This is intellectual humility of the useful sort. It recognizes the limits of predictability while identifying genuine regularities. It distinguishes between pattern and outcome, between process and product. We can understand how innovation works without being able to predict what innovations will emerge. We can understand how pruning works without being able to predict which specific innovations will survive. The adjacent possible explains why planning degrades with distance while still being useful at the edge.
Planning at the edge of the possible
Four billion years ago, a carbon atom had perhaps a few hundred molecular configurations available. Today that same carbon atom can be part of a T-cell receptor, a silicon carbide transistor, or the ink in these words. The adjacent possible hasn’t just expanded; it’s exploded into configurations unimaginable at earlier stages. And we cannot, even in principle, predict what configurations become possible three or five or ten moves ahead.
But we can plan at the edge. We can identify what’s adjacent now. We can estimate which combinations are likely to work. We can reason about pruning rules and selection pressures. We can move systematically into the adjacent possible, learning as we go, adjusting as new doors appear.
The systems that do this well, whether biological lineages or technology companies or scientific research programs, share certain features. They maintain enough diversity to explore multiple adjacent doors simultaneously. They have pruning rules that select for viability without being so harsh that nothing survives. They build on existing components rather than trying to leap too far ahead. They recognize that some adjacent directions will be pruned, and they don’t over-commit before testing.
This is why venture capital portfolios work the way they do. Most investments will be pruned by market forces, technical challenges, competitive dynamics. But the portfolio approach explores multiple adjacent doors simultaneously, recognizing that we cannot predict which specific innovation will succeed but we can predict that the statistical patterns of innovation will produce some successes. It’s planning at the edge: informed by understanding of the adjacent possible and pruning rules, but honest about the limits of foresight.
The palace extends as we build it. We cannot draw complete blueprints. But we can see the next room, estimate its dimensions, reason about whether it’s worth entering. That’s not perfect planning, but it’s the kind of planning that actually works when the possibility space constructs itself as we explore it.
That’s the adjacent possible, and that’s why planning works at the edge but fails in the distant future.