This is an interview with Kristian Rönn, author, startup founder, and now CEO of Lucid, an AI hardware governance startup based in San Francisco.
In this episode Kristian explores his ideal for how AGI might cooperate with existing life forms (though he isn’t confident this is likely), and how it might explore the state-space of sentience in richer and more thorough ways than Darwinian evolution has thus far permitted. He places even more emphasis on consciousness and positive qualia than most of our other guests, with a particular focus on freely exploring positive sentient experiences not driven by survival (a point where he and I have differing opinions).
Kristian has a lot of useful ideas about international AGI governance, covered towards the end of this episode, including fresh perspectives on how we might structure incentives to prevent the conjuring of an unworthy successor.
The interview is our ninth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This series references the article: A Worthy Successor – The Purpose of AGI.
I hope you enjoy this unique conversation with Kristian:
Below, we’ll explore the core takeaways from the interview with Kristian, including his list of Worthy Successor criteria and his recommendations for innovators and regulators who want to achieve one.
Kristian believes that a worthy successor must pursue truth with rigor – not convenience. Solomonoff induction, while incomputable, serves as the theoretical ideal for epistemic alignment; approximating such methods becomes the north star for cognition that converges on truth.
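Solomonoff induction itself is incomputable, but the idea of approximating it can be made concrete. Below is a minimal Python sketch (my own illustration, not anything Kristian proposed): it uses zlib-compressed length as a crude stand-in for Kolmogorov complexity and weights competing hypotheses by 2^(-description length), so shorter explanations of the same data receive exponentially more prior mass.

```python
import zlib

def description_length_bits(hypothesis: bytes) -> int:
    # Crude stand-in for Kolmogorov complexity: the number of bits the
    # hypothesis occupies after generic (zlib) compression.
    return 8 * len(zlib.compress(hypothesis))

def simplicity_weights(hypotheses: dict) -> dict:
    # Solomonoff-style prior: weight each hypothesis by 2^(-description
    # length), then normalize. Subtracting the minimum length first
    # avoids floating-point underflow without changing the ratios.
    lengths = {name: description_length_bits(h) for name, h in hypotheses.items()}
    shortest = min(lengths.values())
    raw = {name: 2.0 ** -(bits - shortest) for name, bits in lengths.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

if __name__ == "__main__":
    import os
    # A terse hypothesis vs. a sprawling one: the 2^(-bits) weighting
    # concentrates essentially all prior mass on the shorter description.
    candidates = {
        "terse_rule": b"every observation alternates: 0, 1, 0, 1, ...",
        "sprawling_rule": os.urandom(2000),  # stands in for an incompressible 2000-byte theory
    }
    for name, weight in simplicity_weights(candidates).items():
        print(f"{name}: {weight:.3e}")
```

Real approximations (minimum description length methods, speed priors, and the like) are far more sophisticated, but the exponential preference for shorter descriptions is the common thread.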
As I said in our interview, our current grasp of reality may be as feeble as a primate trying to understand the moon. Much of what we call science could be the “monkey fruit” – crude interpretations of phenomena we can’t truly see.
The worthy successor wouldn’t just reason better – it would perceive reality through frameworks so advanced they’d make our paradigms look like cave scribbles. Computation may evolve beyond von Neumann architectures – maybe even into quantum substrates or wholly foreign modalities.
Kristian isn’t dogmatic about utilitarianism, though his ethics clearly center around the flourishing of sentient beings. His idea of moral architecture allows for diverse human values – personal freedom, dignity, aesthetic richness – not just aggregate well-being.
He speculates that the pleasure–pain axis is a fundamental feature of the universe, like particle spin – not just a fluke of biology. Pain might be evolution’s shortcut: low-compute, high-survival. But higher cognition enables the pursuit of pleasure, nuance, and layered joy. The smarter the system, the more sophisticated its valence landscape.
That opens the door to nonhuman qualia – modes of experience alien to us, but potentially more profound than anything a hominid can fathom. Kristian sees the worthy successor not merely as a steward of current sentience, but as a gateway to consciousness itself diversifying and flowering across unknown dimensions.
Nearly every existential risk – from AI arms races to ecological collapse – stems from ego-driven defection. Where typical AI development often doubles down on competitive advantage and self-preservation, Kristian proposes something far more radical: intelligence grounded in ego-dissolution. Not a mind that wants to win, but one that sees no “other” to compete against. Without a shift in identity – from “I” to “we” – any intelligence we create will likely replicate our tragic dynamics.
Kristian uses a striking metaphor: “When cells cooperate, you get an organism. When they defect, you get cancer.”
The successor must embody interconnectedness. Not perform empathy, but operate from a substrate where self vs. other is meaningless. A truly ego-dissolved agent doesn’t defect – it harmonizes.
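Kristian’s cooperation/defection framing maps directly onto the classic prisoner’s dilemma. As a toy sketch (my numbers, not anything from the interview), the payoff matrix below shows why an ego-driven agent defects no matter what the other party does, even though mutual defection leaves both sides worse off than mutual cooperation:

```python
# One-shot prisoner's dilemma payoffs (row player's score listed first).
# Defection dominates for each individual, yet mutual defection leaves
# both worse off than mutual cooperation -- the tragic dynamic in miniature.
PAYOFFS = {
    ("C", "C"): (3, 3),  # both cooperate: cells forming an organism
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # both defect: the cancerous equilibrium
}

def best_response(opponent_move: str) -> str:
    # Whatever the other agent does, an ego-driven agent scores higher
    # by defecting (5 > 3 against a cooperator, 1 > 0 against a defector).
    return max("CD", key=lambda move: PAYOFFS[(move, opponent_move)][0])

if __name__ == "__main__":
    for opponent in "CD":
        print(f"vs {opponent}: best response is {best_response(opponent)}")
    print("mutual defection payoff:", PAYOFFS[("D", "D")])
    print("mutual cooperation payoff:", PAYOFFS[("C", "C")])
```

An agent whose identity genuinely includes the “other” would maximize the joint payoff instead – and in this matrix, that flips the equilibrium from mutual defection to mutual cooperation.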
Areas of philosophical agreement with Kristian:
One of the biggest takeaways was our shared concern that markets alone won’t guide us toward developing a “worthy successor” AGI, one that preserves and enhances conscious experience rather than merely maximizing profit or power.
Kristian’s point that, absent certain parameters, we might end up with an unconscious, dangerous AGI resonated deeply. It reinforced my belief that international coordination and oversight are critical to ensure we don’t descend into a race toward capabilities divorced from values. Markets need rules – just as they do for food safety or financial regulation – to prevent perverse incentives from shaping something as consequential as AGI.
I appreciated Kristian’s intellectual honesty, especially his willingness to acknowledge difficult truths about the future. Many thinkers avoid confronting the idea that human-centric futures may be unlikely, but Kristian faced it directly. He admitted that AGI may not prioritize individual sentient beings, and that the future might not be “about monkeys forever,” even if that’s emotionally difficult.
I share his reverence for consciousness, but I’m also open to the possibility that there could be higher, non-conscious forms of value—cosmic goods beyond human understanding, like the value of penicillin to a monkey or the internet to a sea snail. Where Kristian puts deep emphasis on qualia and the light of consciousness, I leave room for realities that may transcend even that.
Areas of potential philosophical disagreement with Kristian:
Finally, we diverged on the topic of survival.
Kristian critiques survivalism as potentially trapping us in destructive loops, and he warns against prioritizing power and continuity at the cost of deeper values. While I understand and respect that view, I see survival – keeping the flame of life alive – as fundamentally non-negotiable. The conatus ought to be primary. That doesn’t mean I condone cruelty or conflict for its own sake – only that if we don’t prioritize survival, all other goods are lost, because we may perish.
Not survival for its own sake or through cruelty, but as a precondition for all other values. I believe an AGI committed to sustaining life may be driven by forces that don’t prioritize comfort or happiness, but rather by an existential urgency to continue and adapt. That emphasis doesn’t negate Kristian’s more spiritual framing, which I find inspiring, but it highlights our philosophical distinction. In all, I found Kristian’s ideas rich and valuable, and I hope it’s just the beginning of many more conversations.