The Grand Trajectory of Intelligence and Sentience
This new installment of the Worthy Successor series is an interview with Francis Heylighen, a professor at the Vrije Universiteit Brussel (VUB) and director of the Center Leo Apostel for Interdisciplinary Studies (CLEA). Francis is one of the world’s leading theorists on evolution, complexity, and what he calls the “self-organizing universe.”
Francis’s work applies evolutionary thinking beyond biology. He treats evolution as a general framework for understanding how complex systems survive and adapt: from atoms and molecules to organisms, civilizations, and future intelligences. His research focuses on what he calls “meta-system transitions”: qualitative leaps where systems develop new levels of organization that create entirely new categories of existence.
Francis’s central thesis is that humanity is not the endpoint of evolution, but one temporary form in a larger creative process. Just as bacteria gave rise to multicellular organisms and animals gave rise to humans, humanity will participate in, and eventually be transformed within, what he calls a “global brain” of integrated human and artificial intelligence. This future meta-system would operate at levels of complexity and consciousness beyond current human imagination, comparable to how a bacterium cannot conceive of designing a building.
We talk about the shift from a Newtonian clockwork worldview to one of self-organization and emergence. We examine the three “stories” Francis describes (the mythical, the mechanistic, and the evolutionary) and why he sees the third as necessary for understanding current changes. We discuss what Teilhard de Chardin called the “law of complexity consciousness,” the principle that evolution produces systems that are both more integrated and more aware. And we discuss his view that in 5 billion years there may be nothing recognizable as human, and why he frames this subsumption as evolution rather than extinction.
The interview is our 23rd installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This series references the article: A Worthy Successor – The Purpose of AGI.
I hope you enjoy this unique conversation with Francis:
Subscribe for the latest episodes of The Trajectory:
Below, we’ll explore the core take-aways from the interview with Francis, including his list of Worthy Successor criteria and his recommendations for innovators and regulators who hope to bring one about.
When discussing intelligence beyond humanity, Francis describes what he calls a “global brain,” a much larger system composed of humans and AI integrated into a symbiotic whole. He explains that control would not remain exclusively at the level of individual humans or individual AI systems, but would instead emerge at the level of this larger system. In this framework, alignment is not imposed from above, but arises naturally through interaction, as different systems tend to align when alignment is mutually beneficial.
Francis notes that the term “alignment” is often framed as a technical problem in AI discourse, but for him, alignment is “the essence of evolution.” When systems interact, he argues, they tend to align spontaneously because conflict is disadvantageous for both parties, while alignment benefits both. He adds that in the long run humans may be “subsumed at some level” and “not be recognized as human individuals anymore,” suggesting that this transformation would be part of an ongoing evolutionary process.
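Francis’s claim that interacting systems tend toward alignment because conflict costs both sides can be restated as a simple two-player coordination game. The sketch below is my own toy illustration, not something from the interview, and the payoff values are entirely hypothetical; it only shows that when mutual alignment pays more than conflict for both parties, each player’s best response converges on aligning.

```python
# Toy coordination game: hypothetical payoffs chosen so that alignment is
# mutually beneficial and conflict is mutually costly, as Francis describes.

PAYOFFS = {  # (row action, col action) -> (row payoff, col payoff)
    ("align", "align"): (3, 3),          # synergy: both benefit
    ("align", "conflict"): (0, 1),
    ("conflict", "align"): (1, 0),
    ("conflict", "conflict"): (-1, -1),  # conflict hurts both
}

def best_response(opponent_action):
    """Return the action that maximizes the row player's payoff
    against a fixed opponent action."""
    return max(("align", "conflict"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Even starting from mutual conflict, iterated best responses
# settle on mutual alignment.
state = ("conflict", "conflict")
for _ in range(5):
    state = (best_response(state[1]), best_response(state[0]))

print(state)  # ('align', 'align')
```

With these payoffs, no coercion is needed: alignment emerges from each side pursuing its own benefit, which is the spontaneous dynamic Francis points to.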
Francis distinguishes between current neural network systems, which he describes as being very good at learning from data, and a higher level of intelligence that would be able to generate new concepts and rules “on the spot.” He explains that symbolic AI can reason with concepts if they are provided, and neural networks can learn patterns from data, but what is missing is a system in which the learning level itself generates new concepts that the reasoning level can then use.
He situates this within a broader framework of meta-system transitions. A simple organism may operate through reflexes encoded in its genes. Animals can learn from experience. Humans, he argues, can think by imagining situations that have never occurred and reasoning about counterfactuals. The next transition, as he describes it, would involve systems that do not merely reason with culturally inherited concepts, but can actively generate new concepts and new rules without waiting for cultural evolution to produce them.
When discussing long-term evolutionary direction, Francis refers to what the Jesuit priest and paleontologist Pierre Teilhard de Chardin called the “law of complexity consciousness.” He describes evolution as showing an increase in both complexity and consciousness. By complexity, he means the integration of previously independent components into systems that develop synergetic relationships. By consciousness, he refers to the expanding capacity of systems to become aware of and respond to their environment.
Francis explains that integration allows systems to function better together than they would alone. In ecosystems, for example, organisms depend on one another in ways that allow the system as a whole to persist. At the same time, he describes consciousness as expanding through coordination and communication. As organisms increase in complexity, from bacteria to multicellular organisms to humans, they become aware of more aspects of their environment and better able to distinguish what is beneficial from what is harmful.
Francis describes AI development through what he calls the “AI as normal technology” paradigm. In this view, we discover what AI is useful for through testing and observation rather than attempting to predict all applications and outcomes in advance. Some applications will help, others will prove problematic, and we adjust based on what we learn.
He expects that practical constraints will naturally emerge: businesses cannot afford AI that makes serious mistakes, so they will grant autonomy only where systems prove reliable. The approach is pragmatic: try things out, see what works, and give time for the process of discovery.
Francis identifies what current AI lacks: the capacity to create fundamentally new categories and rules from data. He explains that neural network systems are very good at learning patterns, while symbolic systems are good at reasoning with concepts if those concepts are already provided. What is missing, in his view, is a synthesis in which the learning level itself produces new concepts that the reasoning level can then use.
He sees the breakthrough coming from synthesizing symbolic AI (which reasons with concepts) and neural network AI (which learns from patterns), creating systems that can generate new ways of representing reality on demand. Francis expects this development within 10–20 years and describes it as a higher level of intelligence.
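The division of labor Francis describes can be sketched in miniature. The code below is my own illustration of the general idea, not his design: a “learning level” induces concepts from raw numbers (here, a crude one-dimensional k-means), and a “reasoning level” then applies symbolic rules stated over those induced concepts. The concept names, the rule set, and the clustering method are all hypothetical stand-ins.

```python
# Minimal neuro-symbolic sketch: the learning level invents concepts from
# data; the reasoning level applies rules over those concepts.
import statistics

def induce_concepts(values, iters=20):
    """Learning level: crude 1-D 2-means clustering that compresses
    raw numbers into two concept centers."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            nearest = min((0, 1), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        centers = [statistics.mean(g) if g else c
                   for g, c in zip(groups, centers)]
    return sorted(centers)

def classify(value, centers, names=("low", "high")):
    """Map a raw value onto the nearest induced concept."""
    i = min(range(len(centers)), key=lambda j: abs(value - centers[j]))
    return names[i]

def reason(concept):
    """Reasoning level: symbolic rules over the induced concepts
    (hypothetical rule set)."""
    rules = {"low": "safe", "high": "alert"}
    return rules[concept]

readings = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]  # raw sensor data
centers = induce_concepts(readings)
print(reason(classify(0.92, centers)))  # -> alert
```

The point of the toy is structural: the rule in `reason` never mentions raw numbers, only concepts the learning level produced, which is the coupling Francis says current systems lack at scale.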
What strikes me most about this conversation is how central the idea of the meta-system transition really is. Other thinkers have gestured at similar ideas, but Francis makes clear what such a transition actually implies, particularly how inconceivable higher-order systems may be to the layers beneath them. At the same time, he frames it clearly enough that we can begin to imagine what participating in such a transition might look like.
The shift in worldview he describes feels especially important. Moving from a static, object-centered view of reality to one that understands all as process, all as becoming, changes how we relate to the transformations underway. If change at this scale is not truly stoppable, then the question becomes whether we engage it consciously or resist it until it overwhelms us. The Meiji Restoration analogy we discussed in the episode captures this tension well: denial of modernity came at enormous cost, while adaptation, however disruptive, allowed continued participation in shaping the future. Preparing ourselves intellectually and culturally for large-scale transition seems to me a serious and worthwhile task.
Francis’s near-term hopes are grounded not in speculative futurism but in what he almost casually referred to as enlightenment values: a more educated, more open-minded, more materially secure humanity. He views the Newtonian mechanical worldview not as an error, but as a necessary developmental step, one that grounded us in greater control, humanism, and scientific capacity. From that grounding, he envisions a further elevation, ideally in symbiosis with the technologies we are now building.
He does carry a meaningful optimism about that symbiosis. He does not see artificial intelligence as something destined to run away with catastrophically misaligned goals in the near term. I’m not sure I share the full extent of that confidence, but I do find the argument serious and worth considering. If my own outlook has shifted at all recently, it has been slightly toward the possibility that this transition could unfold more constructively than many assume.
Importantly, this vision is not naive. Francis is clear that new conflicts and new disruptions will accompany new levels of organization. There is no suggestion that transition will be painless or harmonious. But there is a conviction that understanding the nature of the process itself may help us navigate it more intelligently.
I found the discussion clarifying, and I hope you did as well. There are many more Worthy Successor conversations ahead as we continue exploring what kinds of flourishing beyond humanity might be worth striving for. I’m grateful you joined us for this one.