John Smart – Evolution from Cells to Super-intelligence (Worthy Successor, Episode 21)

This new installment of the Worthy Successor series is an interview with John Smart, Director of the EvoDevo Institute and a longtime futures theorist whose work spans accelerating change, developmental systems, and the long-term trajectory of intelligence.

John has been studying complexity, acceleration, and adaptive systems for over two decades – founding the Acceleration Studies Foundation and developing what he calls “accelerationology”: an attempt to understand why change speeds up over time, and when that acceleration is beneficial versus destructive.

In this episode, we go deep on John’s view that intelligence does not evolve randomly, but instead progresses through constrained developmental stages – from chemistry to biology to nervous systems to digital cognition. He frames AGI as another “meta-system transition,” comparable in magnitude to the emergence of cells or brains, and emphasizes that such transitions come with massive jumps in modeling capacity, speed, and agency.

The interview is our twenty-first installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This series references the article: A Worthy Successor – The Purpose of AGI.

I hope you enjoy this unique conversation with John.

Below, we’ll explore the core takeaways from the interview with John, including his Worthy Successor criteria and his recommendations for innovators and regulators who hope to achieve one.

John Smart’s Worthy Successor Criteria

1. Must balance Creativity, Accountability, Persistence, and Sentience (CAPS)

John frames a worthy successor around four deeply recurring properties he sees across successful evolutionary and developmental systems: Creativity, Accountability, Persistence (or protection), and Sentience – what he calls CAPS. These are not framed as uniquely human values, but as adaptive properties that systems must balance to survive, explore, and regulate themselves over time.

For John, creativity alone is insufficient. Systems that generate novelty without accountability or protection collapse, destabilize, or become destructive. A worthy successor must support unpredictable exploration while constraining that exploration within stable developmental bounds that preserve what matters.

2. Must advance through symbiogenesis rather than replacement

Rather than expecting advanced AI to simply replace humanity, John consistently emphasizes symbiogenesis – intelligence advancing through the merging and mutual incorporation of biological and technological systems. He describes futures involving empowered cyborgs and super-sentient collectives, where humans and machines co-evolve through increasingly tight cognitive integration.

Personal AIs, in his view, represent the likely first, and most important, stage of this process: private cognitive extensions that hold individual data models, support judgment and values, and gradually become persistent proxies for human identity.

3. Must retain eumortality – the capacity for “good death” rather than static immortality

A central idea in John’s thinking is what he calls eumortality – good death. He argues that healthy systems are not immortal or static; instead, they continuously prune, archive, and discard what no longer serves adaptation. Progress requires selective forgetting as much as growth.

For John, a worthy successor must retain this property: the ability to eliminate or suppress maladaptive structures while preserving what remains useful, rather than freezing itself into permanence. Without this capacity to prune and transform, intelligence risks stagnation, fragility, or collapse over long time horizons.

Regulation / Innovation Considerations

1. Build AI through “natural alignment,” not purely top-down control

John argues that alignment cannot be achieved solely through abstract rules or engineered perfection. Instead, he advocates borrowing from biology – building systems shaped by developmental constraints, redundancy, and statistical safety in numbers.

Living systems achieve robustness not through total control, but through diversity, immune-like defenses, and the dominance of well-behaved agents over pathological ones. In this view, alignment emerges statistically and developmentally over time, rather than being guaranteed by a single correct design or set of rules.

2. Democratize AI via personal agents to prevent domination

John believes decentralized personal AIs are critical to preventing plutocratic concentration of power. He envisions individuals equipped with cognitive prosthetics that diagnose bias, guide decisions, and coordinate collective action – enabling better voting, economic choices, and civic engagement.

He sees this as a uniquely powerful counterweight to centralized tech dominance. By shifting agency and strategic coordination down to the individual level, personal AIs reduce the ability of any single actor or institution to dominate the system as a whole.

3. Treat AGI development as gardening, not engineering

John repeatedly stresses that intelligence should be cultivated like an ecosystem rather than constructed as a rigid machine. He warns against assuming godlike control, arguing instead for experimentation, pluralism, and careful stewardship – planting many variations and observing which flourish. In this model, progress comes from guiding evolutionary trajectories and pruning failures, not from attempting to specify or lock in a final optimal design.

Concluding Notes

I found John’s evo-devo framing genuinely refreshing. His insistence that development matters just as much as evolution offers a rare systems-level lens on AGI – one that connects biology, networks, and future intelligence into a coherent arc of becoming.

One idea that particularly stood out was his concept of eumortality. The notion that progress depends on selective forgetting – on the disciplined ability to prune the obsolete rather than preserve everything indefinitely – feels deeply aligned with how both brains and civilizations actually grow. It reframes “survival” not as static persistence, but as continual adaptive renewal.

Where I found myself more cautious than John was around his optimism about “natural alignment.” While I find his argument compelling – that deeply entrenched developmental dynamics like CAPS may reassert themselves across substrate transitions – I remain uncertain that these attractor states will reliably persist at electronic timescales or under adversarial pressures. Evolution does not guarantee benevolence, and increased intelligence expands both cooperative and competitive possibilities.

Relatedly, while John emphasizes that “the network always wins,” I worry this framing can underplay the urgency of near-term governance. If we are not already trending toward a worthy successor, that recognition should catalyze more than patient stewardship alone – it should sharpen our attention to present institutional and incentive failures.

That said, John holds these views lightly. Throughout the conversation, he repeatedly acknowledges uncertainty, emphasizes empirical humility, and rejects any notion of godlike control. More than anything, his work provides a set of attractor diagnostics – ways of recognizing whether we are drifting toward empowering, cooperative futures or toward domination, dysregulation, and dehumanization.

If nothing else, John Smart is the first living thinker who has meaningfully raised my estimate of the odds that we land on a worthy rather than an unworthy successor. I’d welcome future conversations stress-testing CAPS against more adversarial dynamics and competitive regimes. For now, I’ll continue following John’s work through the EvoDevo Institute.
