The Grand Trajectory of Intelligence and Sentience
This new installment of the Worthy Successor series is a conversation with Stephen Wolfram, founder and CEO of Wolfram Research, creator of Mathematica and the Wolfram Language, and a physicist and computer scientist whose work spans computation, complexity, and the foundations of physics.
Stephen approaches the question of a posthuman successor by reframing intelligence itself as one expression of a much broader computational universe. Rather than treating minds as uniquely privileged, he suggests that many natural systems – from weather to geological processes – may exhibit forms of computation that are no less sophisticated than those occurring in human cognition.
At the center of Stephen’s worldview is a destabilizing claim: that concepts like goodness, suffering, or moral progress may not generalize beyond the present human context. He argues that such notions are deeply entangled with culture and history rather than functioning as abstract moral primitives that can be extended indefinitely.
We talk about intelligence not as a ladder of moral ascent, but as a computational phenomenon embedded in a universe already saturated with irreducible processes. We examine whether increasing predictive power resolves ethical ambiguity, whether moral clarity scales with computation, and whether identity itself survives as intelligence spans more of what Stephen calls the Ruliad.
The interview is our 22nd installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This series references the article: A Worthy Successor – The Purpose of AGI.
I hope you enjoy this unique conversation with Stephen:
Below, we’ll explore the core takeaways from the interview with Stephen, including his list of Worthy Successor criteria and his recommendations for innovators and regulators who want to achieve one.
Stephen emphasizes that the natural world already performs immense amounts of computation that are not aligned with human goals. Computational sophistication does not imply relevance to human concerns. From this perspective, expanding computational capability does not guarantee increasing alignment with human purposes.
He does not frame this as a failure of intelligence. Rather, human-style cognition is described as a narrow slice within a far broader computational landscape.
Stephen repeatedly states that moral categories such as goodness and suffering are historically and culturally situated rather than abstract, portable primitives. He questions whether they can be extended outside the current human conceptual framework.
When asked about suffering in animals, he raises concerns about projection – whether internal states can be assumed equivalent across different systems.
He does not deny that humans experience suffering; rather, he questions whether the conceptual apparatus we use can be cleanly generalized beyond its originating context.
Stephen introduces the Ruliad as “the infinite entangled limit of all possible computational rules.” In discussing what it would mean for intelligence to span this totality, he presents a limit case: coherent existence depends on boundedness, and a system that spans everything loses the very boundaries that make individual identity possible.
Stephen argues that abstract labels like “good,” “sentient,” or “conscious” do not resolve anything unless they constrain action. Simply defining a term differently does not produce clarity unless that definition carries downstream commitments – legal, moral, or structural.
In practical terms, this means that governance debates about AI sentience or moral status cannot stop at classification. A regulator who declares a system “sentient” must specify what follows: new rights, new restrictions, new liabilities, or new obligations. Without that “tower of consequences,” the word itself remains unstable and does no real work.
Stephen suggests that even a system capable of extraordinary prediction would still face a decision problem: which outcome should it optimize? Knowing consequences is not the same as knowing what to value.
For governments and AI labs, this has direct implications. Alignment cannot be reduced to improved modeling alone. Even perfect prediction leaves open the question of objective selection – whose welfare, which tradeoffs, and which time horizon.
Stephen explains that much of the universe is computationally irreducible – meaning its detailed behavior cannot be shortcut or predicted without effectively running it step by step. However, within such systems, there exist “pockets of computational reducibility” where patterns can be identified and leveraged.
Innovation, in his framing, arises from discovering and exploiting those pockets. They allow for prediction, compression, and the ability to “jump ahead” rather than simulate every detail.
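To make this concrete, here is a minimal Python sketch – my own illustration, not code from the interview – contrasting two of Wolfram’s elementary cellular automata: Rule 30, whose center column has no known shortcut and seems to demand step-by-step simulation, and Rule 250, whose behavior collapses to a simple closed form, a small “pocket of reducibility.”

```python
# A minimal sketch of computational irreducibility vs. reducibility,
# using elementary cellular automata (the systems Wolfram studies).
# This illustration is my own, not code from the interview.

def step(cells, rule):
    """Advance an elementary cellular automaton by one step.
    `rule` is the standard Wolfram rule number (0-255): each cell's next
    value is the bit of `rule` indexed by its 3-cell neighborhood,
    read as a binary number (left*4 + self*2 + right)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def center_column(rule, steps, width=201):
    """Run from a single black cell and record the center cell over time."""
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = step(cells, rule)
    return column

# Rule 30: the center column looks random, and no shortcut is known --
# to learn cell t you effectively have to run all t steps (irreducibility).
print("rule 30 :", center_column(30, 24))

# Rule 250: a "pocket of reducibility" -- the center column is just an
# alternating pattern, so step t is predictable in O(1) without simulating.
simulated = center_column(250, 24)
closed_form = [(t + 1) % 2 for t in range(24)]
assert simulated == closed_form
print("rule 250:", simulated, "(matches the closed form (t + 1) % 2)")
```

For Rule 250 we can answer “what is the center cell at step one million?” instantly from the formula; for Rule 30, as far as anyone knows, the only way to find out is to run the million steps.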
I deeply appreciated Stephen’s restraint and his refusal to casually assign cosmic significance to human or posthuman intelligence. Few people have thought as rigorously about complexity as he has, and his insistence that there may be no appreciable increase in complexity from molten magma to modern civilization genuinely challenges many of our intuitive narratives about progress.
What stood out most was how distinct his framing was from nearly every other guest in this series. Many thinkers describe intelligence and sentience as existing along a spectrum – gradually moving toward richer or more morally weighty forms. Stephen doesn’t see it that way. For him, magma, hurricanes, humans, and cows are all unfolding computational processes. That shift in lens changes the entire conversation.
Where I found myself wrestling was in the moral implications of that view. I appreciated his emphasis on clarity and measurability – the insistence that our terms must be tied to something operational. At the same time, I’m not convinced that moral concern disappears simply because it resists perfect quantification. My intuition remains that there are meaningful distinctions in how life unfolds – especially when consciousness enters the picture.
Stephen also entertains the possibility that humanity might upload into some stable attractor and that this could be “as good as it gets.” I’m less certain. My own sense is that remaining engaged in the open-ended process of becoming – continuing to expand our powers – is part of what gives life value.
That said, I’m genuinely grateful for Stephen’s participation. His worldview is carefully constructed and unmistakably his own. Even where I felt stretched, I found the tension productive. A serious conversation about a Worthy Successor must include perspectives that challenge our assumptions. Stephen certainly did that.