This new installment of the Worthy Successor series is an interview with David Sloan Wilson – an American evolutionary biologist, Distinguished Professor Emeritus of Biological Sciences and Anthropology at Binghamton University, and co-founder of the Evolution Institute and ProSocial World, best known for his work on multi-level selection, social evolution, and how groups succeed or fail as coherent systems.
David’s work matters deeply because it treats evolution not as a purely biological process, but as a general framework for understanding how complex systems survive, cooperate, fracture, or collapse – whether those systems are organisms, cultures, governments, or emerging artificial intelligences. His core claim is simple but destabilizing: selection does not automatically favor what’s good for the whole, and without selection operating at the level of entire systems, lower-level competitive dynamics tend to undermine coherence over time.
In this conversation, we explore what it would mean for humanity to act as a steward of evolution itself – not freezing life in its current form, but guiding its transformation toward futures that remain coherent, prosocial, and resilient over deep time. David resists sentimental futurism and instead grounds his optimism in a stark evolutionary truth: systems either learn to coordinate at higher levels, or they are replaced by systems that do.
The interview is our eighteenth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This series references the article: A Worthy Successor – The Purpose of AGI.
I hope you enjoy this deeply unique conversation with David:
Below, we’ll explore the core takeaways from the interview with David, including his list of Worthy Successor criteria and his recommendations for innovators and regulators who hope to bring one about.
For David, the defining trait of a worthy successor is not intelligence, speed, or power – but pro-social coordination at scale. A successor worthy of inheriting the future must be capable of acting on behalf of the whole system it inhabits, rather than exploiting it from within. This mirrors how healthy organisms suppress cancerous behavior among their cells.
Crucially, David emphasizes that pro-sociality does not emerge automatically. It must be selected for intentionally, or else lower-level competitive dynamics dominate. Without whole-system selection, intelligence becomes parasitic rather than generative.
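David’s warning about lower-level dynamics can be made concrete with a small simulation. What follows is my own toy sketch of two-level selection, not a model from the interview; parameters like `coop_benefit` and `defect_edge` are illustrative assumptions. Within every group, defectors out-reproduce cooperators; when the `group_selection` flag is on, groups also contribute offspring in proportion to how cooperative they are – a crude stand-in for whole-system selection.

```python
import random

def evolve(n_groups=200, group_size=5, generations=300,
           coop_benefit=2.0, defect_edge=1.05, group_selection=True):
    """Toy two-level selection model (illustrative only).

    Individuals are True (cooperator) or False (defector).
    Within a group, defectors reproduce defect_edge times faster;
    if group_selection is on, a group's total output also scales
    with its fraction of cooperators - the whole-system criterion.
    """
    groups = [[random.random() < 0.5 for _ in range(group_size)]
              for _ in range(n_groups)]
    for _ in range(generations):
        pool = []
        for g in groups:
            coop_frac = sum(g) / len(g)
            # Between-group selection: cooperative groups export more offspring.
            weight = 1 + coop_benefit * coop_frac if group_selection else 1.0
            # Within-group selection: defectors get a local reproductive edge.
            fitness = [1.0 if coop else defect_edge for coop in g]
            pool.extend(random.choices(g, weights=fitness,
                                       k=round(weight * group_size)))
        # Re-form groups at random from the mixed offspring pool.
        random.shuffle(pool)
        groups = [pool[i * group_size:(i + 1) * group_size]
                  for i in range(n_groups)]
        groups = [g for g in groups if len(g) == group_size]
    return sum(sum(g) for g in groups) / sum(len(g) for g in groups)

if __name__ == "__main__":
    random.seed(0)
    print("with whole-system selection:    %.2f" % evolve(group_selection=True))
    print("without whole-system selection: %.2f" % evolve(group_selection=False))
```

With these (assumed) parameters, cooperation spreads when whole-system selection is on and collapses when it is off – the same within-group dynamics, opposite global outcomes, which is David’s point in miniature.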
David pushes back on the idea that a flourishing future means everyone converging on the same emotional states, moral intuitions, or ways of living. He argues that much of what we treat as “human nature” is, in fact, the imprint of a narrow cultural lens. A healthy future, in his view, makes room for radically different ways of thinking, feeling, and existing – without assuming that our own preferences should be the template for everything that comes next.
At the same time, David is clear that respecting diversity does not mean trying to preserve every species, culture, or form of life indefinitely. Evolution, by its nature, involves change, loss, and replacement. A worthy successor would allow new forms of life and experience to emerge – even ones that feel alien or uncomfortable to us – while recognizing that extinction and transformation are not failures of evolution, but part of how it works.
David challenges a deeply entrenched assumption in modern evolutionary thinking: that evolution must always be blind, unconscious, and unsteered. While that framing may apply to biological evolution operating through genes, he argues it breaks down once we move into the domain of culture, institutions, and governance. At those levels, humans are not merely subject to evolutionary forces – we are capable of reflecting on them, understanding their dynamics, and intervening deliberately.
From David’s perspective, this capacity for conscious stewardship is not optional. If higher-level systems fail to intentionally select for whole-system outcomes, lower-level competitive dynamics will inevitably dominate. The result is not neutral drift, but the emergence of societies that optimize for local advantage at the expense of long-term coherence. A worthy successor, then, would be able to recognize when selection is happening at the wrong level – and to consciously redirect evolutionary pressures toward outcomes that preserve system-level health.
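The conversation stays non-mathematical, but this “wrong level” diagnosis has a standard formalization that readers may find useful: the multi-level partition of the Price equation, which splits total evolutionary change in a trait into a between-group and a within-group component:

\[
\bar{w}\,\Delta\bar{z} \;=\; \underbrace{\operatorname{Cov}(w_k,\, z_k)}_{\text{between-group selection}} \;+\; \underbrace{\operatorname{E}\!\left[w_k\,\Delta z_k\right]}_{\text{within-group change}}
\]

Here \(k\) indexes groups, \(w_k\) and \(z_k\) are a group’s mean fitness and mean trait value, and \(\bar{w}\), \(\bar{z}\) are population-wide means. When the within-group term (the local advantage of exploiters) outweighs the between-group covariance, a trait that benefits the whole declines even though every part is “succeeding” – exactly the misdirected selection David wants a worthy successor to detect and redirect.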
David is explicit that the risks around AI development are not, at their core, technical failures – they are failures of worldview. He argues that the outcomes we get depend on the assumptions guiding development, and he names several common worldviews that reliably push systems in the wrong direction. Among them are hyperindividualism, which prioritizes personal efficiency over collective capability; market fundamentalism, which assumes markets automatically produce good outcomes; anthropocentric dominance, which treats humans as separate from the systems they depend on; and technological determinism, the belief that technology will turn out well regardless of how it is guided.
When these worldviews shape development, David argues, they produce what he calls the “cancer pattern.” The problem is not that parts of the system fail to grow – it’s that they succeed in the wrong way. Just as cancer cells thrive locally by reproducing faster than their neighbors while undermining the organism as a whole, systems guided by these worldviews reward narrow forms of success that ultimately degrade the larger structure they depend on.
From this perspective, progress can be deeply misleading. Rapid growth, increased efficiency, and competitive advantage may look like success when viewed in isolation. But unless the criteria for success are set at the level of the whole system, those gains predictably come at the expense of long-term coherence and collective flourishing. The core governance challenge, as David frames it, is therefore not slowing development – it is changing what gets selected for, so that success at the local level does not undermine the system that sustains it.
David argues that effective governance in complex systems cannot be handled through fixed plans or single decisions made upfront. Instead, he points to real-world cases where AI is already being deployed at intermediate scales – some failing badly, others working remarkably well – and notes a consistent pattern among the successes. In each case, there is a clearly defined system-level goal that serves as the target of selection, rather than narrow optimization for profit or efficiency.
Because these systems are inherently complex, David emphasizes that every intervention must remain provisional. Decisions require ongoing oversight, continual assessment of outcomes, and a willingness to adjust course when things aren’t working as intended. What matters most is not getting the design “right” the first time, but repeatedly checking whether the system is actually functioning well for the people and purposes it is meant to serve.
In evolutionary terms, David describes this process as a variation–selection–replication cycle: trying different approaches, comparing outcomes, retaining what works, and repeating the process again and again. These cycles, he argues, must operate at multiple levels and always involve human judgment. The core question is pragmatic and ongoing – not theoretical: Is this working for us?
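In code, the cycle David describes is essentially a simple evolutionary search loop. The sketch below is my own caricature, assuming hypothetical `propose_variants` and `system_level_score` functions that stand in for the variation step and for what, in practice, would be ongoing human judgment about a whole-system goal:

```python
import random

def vsr_cycle(initial_policy, propose_variants, system_level_score,
              rounds=20, variants_per_round=5):
    """Variation-selection-replication as a loop (illustrative sketch)."""
    incumbent = initial_policy
    best = system_level_score(incumbent)
    for _ in range(rounds):
        # Variation: try several modified versions of the current approach.
        for candidate in propose_variants(incumbent, variants_per_round):
            # Selection: "is this working for us?" judged at the system level.
            score = system_level_score(candidate)
            if score > best:
                # Replication: retain what works as the next round's baseline.
                incumbent, best = candidate, score
    return incumbent

# Toy usage: a "policy" is just a number, and the (entirely made-up)
# system-level goal is to land near a target value of 7.0.
random.seed(0)
result = vsr_cycle(
    initial_policy=0.0,
    propose_variants=lambda p, k: [p + random.gauss(0, 1.0) for _ in range(k)],
    system_level_score=lambda p: -abs(p - 7.0),
)
print(round(result, 2))
```

Structurally this is just a (1+λ) evolutionary strategy; David’s caveat is that in real governance the scoring step can never be frozen into a single scalar – it has to remain a recurring, multi-level act of human judgment.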
Perhaps David’s most fundamental claim is that governance failures are downstream of worldview failures. Without an understanding of evolution as a multi-level process – one that applies to cultures, institutions, and technologies – regulatory tools will be misapplied, ignored, or actively undermined. From his perspective, many current policy efforts falter not because decision-makers lack good intentions, but because they lack the conceptual framework required to see whole-system dynamics clearly.
He stresses that this way of thinking is historically new, emerging only with advances in complexity science and evolutionary theory over the past few decades. As a result, many leaders simply haven’t encountered it, let alone internalized it. David suggests that broad exposure to this worldview – teaching people how selection operates across levels – may be as important as any specific regulatory mechanism. Until that happens, regulation risks treating symptoms while leaving underlying dynamics untouched.
There were several aspects of David’s thinking that I found genuinely clarifying. Toward the end of the episode, he articulated a lens that felt like a clean distillation of much of what this series has been circling: a system that is good on the inside – where individual agents are treated well – and good on the outside, in the sense that the overall system continues to flourish and expand the space of possibilities. That framing landed with me, and in many ways captures the tension this entire series is trying to hold.
I was also struck by his framing of artificial intelligence as a form of artificial evolution, rather than merely artificial intelligence. That lens feels directionally right. If AGI is to become part of the broader, unfolding process of life – rather than a dead optimizer narrowly pursuing economic or military objectives – then different criteria for success come into focus. “Intelligence” alone feels too thin a category; artificial evolution may be a more useful starting point for thinking about what we actually want to create.
That said, there were moments where I struggled to fully square this optimism with the harsher realities of evolutionary history. Higher-order coordination does emerge, but it often does so by subsuming, transforming, or displacing what came before. It’s not obvious to me that long-term flourishing implies universal inclusion, and that tension feels worth naming.
Still, I have tremendous respect for David’s project. He is clearly trying to apply these ideas in the real world, and his evolutionary lens – applied to culture, technology, and governance – is indispensable for thinking clearly about what comes next. I’m grateful for his time, and I strongly recommend digging deeper into his work.