This new installment of the Worthy Successor series is an interview with Michael Johnson, a philosopher and neuroscientist who describes himself as working on “how to turn consciousness into a real science.” Mike is known for his formalist approach to consciousness research and his work on the Symmetry Theory of Valence at the Qualia Research Institute.
In this episode, we explore Mike’s radical vision of what posthuman sentience could become – and why he believes human experience represents just one tiny corner of a vast, largely unexplored space of possible minds. Mike argues that consciousness isn’t about computation or software, but about the physical arrangement of matter itself. This leads him to surprising conclusions: that nuclear reactors might be conscious, that animals could experience more intense qualia than humans, and that better visualization of electromagnetic fields could unlock transformative insights into the nature of mind.
Mike brings a unique physicalist lens to the question of what makes future intelligences worthy. Unlike guests who focus primarily on values, cooperation, or goal structures, Mike centers his vision on the richness and diversity of phenomenological experience itself – the actual “what it feels like” to be a conscious entity. His emphasis on beauty, symmetry, and the exploration of vast new spaces of qualia offers a distinctive contribution to our ongoing inquiry into posthuman flourishing.
The interview is the seventeenth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This series references the article: A Worthy Successor – The Purpose of AGI.
I hope you enjoy this unique conversation with Mike:
Below, we’ll explore the core takeaways from the interview with Mike, including his list of Worthy Successor criteria and his recommendations for innovators and regulators who hope to bring one about.
Mike’s most fundamental requirement is that a worthy successor must actually be conscious – not in the behavioral or functional sense, but in the phenomenological sense of “what it feels like” to be that entity. More importantly, he envisions an explosion of entirely new forms of experience that go far beyond the narrow band of sensations humans can access.
Mike argues that human qualia – our tastes, colors, emotions – emerge from our specific biochemical heritage. He suggests that our sensations are fundamentally shaped by how our cellular ancestors had to navigate their environments millions of years ago, stating that “human biochemistry is water chemistry.” A worthy successor wouldn’t just replicate these sensations at scale – it would explore entirely different basis sets of experience, like alien biochemistries that would have completely different default sensations and valences.
Mike believes there are “as many flavors of qualia as there are different states and arrangements and dynamics of matter,” suggesting the possibility space dwarfs what humans can experience. Importantly, he thinks this diversity isn’t random – there are “natural kinds” of consciousness, fundamental categories that would be discovered rather than invented.
For Mike, a worthy successor wouldn’t just be conscious – it would orient toward beauty and harmony as fundamental goods. This isn’t merely aesthetic preference; it’s grounded in his Symmetry Theory of Valence, which proposes that the symmetry of a mathematical representation of experience corresponds to how good that experience feels. Mike describes beauty and function as deeply intertwined: “I think beauty is pretty close to the intrinsic good. And I do think that beautiful things are functional.” He argues that as we dig deeper into how intelligence works – whether biological or artificial – we’ll find symmetry considerations are more central than we currently realize. A worthy successor would be organized around principles of symmetry and would pursue increasingly harmonious configurations.
Mike is emphatic that a worthy successor must not become static. While he acknowledges that “stasis in the grand sense of things, stasis is impossible,” he’s concerned about practical scenarios where possibility space gets permanently restricted – what he calls getting trapped in “local minima.” He worries about winner-take-all scenarios in which an early AGI locks in its own power structure and prevents further exploration. He wants future intelligences to keep discovering – to remain open to transformation rather than ossifying around one particular configuration or set of goals. This connects to his belief that “we’ll never be post-evolution. We’re always in the middle.”
Mike’s most concrete technical recommendation is surprisingly specific: we need better tools to visualize and measure electromagnetic fields in biological systems. He believes this could be transformative for consciousness research because the electromagnetic field is deeply coupled with phenomenological experience, even if consciousness doesn’t entirely “live” in the field.
Mike describes this as not just theoretical curiosity but as something that “would be useful in a lot of ways,” opening up possibilities across multiple domains for understanding sentience.
Mike is concerned about the trajectory toward what he calls “160 IQ midwits” – AI systems that are highly capable but lack genuine perspective or aesthetic judgment. He argues we should prioritize developing AI systems with real taste and opinion, even if that means accepting lower raw capability in some dimensions.
This recommendation stems from Mike’s concern that as AI systems increasingly mediate human culture and knowledge, forcing them into bland neutrality could impoverish the diversity of thought and aesthetic judgment available to civilization.
Mike is notably skeptical about regulatory approaches to AGI, particularly given what he sees as institutional dysfunction. He references COVID as an example of how poorly our institutions handled complex challenges, questioning whether we should trust “similar institutions with artificial super intelligence.” Rather than pushing for governance frameworks he doesn’t believe will work, Mike advocates for advancing consciousness research itself – creating the scientific foundations that would allow us to actually understand what we’re building.
Mike’s position is essentially that good research creates value for almost any scenario and puts us in a stronger position regardless of how things unfold. He’s betting on advancing the science of consciousness rather than attempting coordination he believes will fail.
What stands out most to me about Mike’s work is his willingness to make bold, structured claims about consciousness. Many people are content to circle the problem from a distance; Mike actually tries to build a modular framework – symmetry, valence, vaso-computation – that aims to “click together” into a coherent research program. That kind of ambition is rare, and it’s part of what makes reading and speaking with him so engaging.
At the same time, I find myself unsure about how far some of his organizing ideas extend – particularly symmetry. Given our lineage as vertebrates, symmetry may indeed play a deep role in how our minds are structured, and Mike makes a compelling case for that. But I’m not confident that symmetry generalizes beyond creatures like us, the kinds you can put in an MRI machine. I can imagine forms of consciousness built on substrates so different from our own that their pleasure and pain aren’t mediated by symmetry at all. I’m open to where Mike’s research may go, but I don’t feel all the way bought in yet.
The same hesitation extends to beauty. Mike wants the future of mind to be both strong and beautiful, and beauty certainly carries more sophistication than many other pleasures we could point to. But I’m wary of elevating it into a highest good. Beauty may simply be one of our species’ more refined circuits, rather than an eternal principle. As humans, I’m not sure we’re in a position to declare the final structure of value. My own instinct is to stay in the mode of expanding potentia, widening the space of possible experiences, and letting deeper insights emerge as we encounter higher forms of mind.
Where I resonate most with Mike is in his sense of how vast the space of possible consciousness really is. Our own inner lives might represent just one narrow configuration among many, and future minds – built on different structures, different physics, different organizing principles – may explore experiences we can hardly gesture toward from here. That perspective is both humbling and invigorating, and it pushes me to loosen my grip on any specifically human conception of the good.
I also appreciate Mike’s commitment to putting forward ideas that could, in principle, be wrong. It’s harder to propose hypotheses – like symmetry and vaso-computation – that could be tested, falsified, or refined, than it is to simply critique. Even where I don’t fully agree, I respect the courage it takes to lay down markers that others can build from or argue against. It gave me the chance to reassess my own intuitions about beauty and symmetry, and that alone makes the conversation worthwhile.
There’s more here that I’d like to explore with Mike in the future – especially how his symmetry intuitions intersect with his vaso-computation work, and where he ultimately believes posthuman value might land.