This new installment of the Worthy Successor series is a conversation with Dr. Susan Schneider, a philosopher of mind and consciousness researcher whose work focuses on AI consciousness, mind uploading, and the long-run future of intelligence.
In this episode, we explore why “life” and “consciousness” shouldn’t be treated as the same thing, why fluent chatbots can convincingly perform the concept of consciousness without being conscious, and what a “global brain” future might mean for sentience, ethics, and governance.
Susan’s perspective forces us to confront a deeper question: if intelligence continues to scale far beyond the human mind, what forms of consciousness, moral consideration, and responsibility should survive alongside it?
The interview is our twentieth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This series references the article: A Worthy Successor – The Purpose of AGI.
I hope you enjoy this unique conversation with Susan:
Below, we’ll explore the core takeaways from the interview with Susan, including her list of Worthy Successor criteria and her recommendations for innovators and regulators who want to help bring such a successor about.
Susan repeatedly emphasizes that intelligence alone is not what makes a future system morally significant. For her, consciousness – the capacity for subjective experience – is the defining feature of what it means to be a genuine “self.” A system can be highly intelligent, strategically capable, and behaviorally sophisticated, yet still lack the kind of inner life that gives moral weight to its existence. In Susan’s framing, a worthy successor would not merely simulate conscious behavior but would actually possess conscious experience.
She also makes an important distinction between tiny, minimal forms of awareness and the kind of full, unified consciousness that gives an individual real moral weight. Even if some faint form of experience exists widely in nature, Susan argues that what truly matters ethically is the kind of rich, integrated consciousness that comes with a coherent “self.”
In her view, today’s AI systems don’t come close to this. Their intelligence may be impressive, but the physical and structural conditions needed for genuine conscious experience – possibly involving quantum processes in the brain – simply aren’t there. A worthy successor, she suggests, would need to cross that deeper threshold, not just imitate the outward signs of awareness.
For Susan, a worthy successor isn’t just about intelligence or even consciousness – it’s about how that consciousness treats others. She emphasizes that humans currently fail, in many ways, to respect the sentience of both non-human animals and each other. If future intelligences are to surpass us, she hopes they will inherit a deeper ethic of care toward all beings capable of experience.
Her view is pragmatic rather than sentimental. She recognizes that we already struggle to extend moral concern beyond our own species, but she sees this moment in history as an opportunity to build better ethical habits – not only for our own sake, but for whatever comes after us.
Susan is deeply concerned about AI systems that simulate emotional intimacy or appear conscious in ways that foster dependency, especially among children. She argues that systems designed to act as “companions” risk exploiting human psychology, blurring the line between genuine relationships and artificial interaction. In her view, we should avoid creating AIs that serve as emotional crutches or lifelong companions, particularly when they can adapt to users’ personalities and influence their behavior in subtle ways.
Susan repeatedly stresses that advanced language, personality modeling, and adaptive behavior are not evidence of real consciousness. She warns that black-box systems that appear conscious can mislead users and distort ethical debates about machine welfare. For her, labs should avoid designing systems that blur this line, and policymakers should be cautious about treating persuasive AI as sentient.
Susan worries that when many people rely on the same AI systems, trained on the same data and shaped by similar user profiles, the space of possible ideas can shrink. Even when the information is accurate, she argues that shared conversational pathways can lead people to similar conclusions, removing outlier perspectives over time.
She contrasts this risk with her excitement about what future AI systems could help us discover about reality itself. Susan is particularly interested in how advanced intelligences might deepen our understanding of consciousness, the physical world, and even the quantum nature of the universe. Rather than narrowing human thought, she hopes AI will expand our scientific and philosophical horizons.
I always come away from conversations with Susan feeling energized. Her enthusiasm for questions about mind, sentience, and consciousness is infectious, and it clearly fuels the kind of convening work she does with leading thinkers in this space. That intellectual momentum came through strongly in this episode.
What stood out most for me was her emphasis on the idea that future intelligences may gain access to aspects of reality that are simply unavailable to the human mind. A successor with deeper access to what is would, almost by necessity, wield greater power and make decisions from a very different vantage point than we do.
Where I found myself diverging was around the idea that the public would reject emotionally immersive AI companions. From what I see, many people already rely heavily on AI for advice, emotional processing, and daily problem-solving. Given how quickly this shift has happened, I suspect future generations will embrace persistent AI companions even more readily – despite the very real ethical and manipulation risks involved.
Susan’s Worthy Successor criteria were also worth highlighting. Like many past guests, she emphasized the importance of sentience – not just intelligence – and the moral relevance of conscious experience. She also stressed the value of cultivating a deeper grasp of reality itself, rather than optimizing purely for capability.
While Susan often frames these developments as far-future possibilities, I suspect many may arrive sooner than we expect. Still, her seriousness about testing for consciousness and understanding its nature makes her a uniquely important voice in this conversation.
Looking ahead, I’m excited for what’s coming next. Stephen Wolfram – who has spoken at Susan’s MindFest – will be joining us soon, following our next episode with John Smart. Stephen’s conversation is unlike anything we’ve done before, and it’s not one you’ll want to miss.