This is an interview with Emmett Shear, CEO of SoftMax, co-founder of Twitch, former interim CEO of OpenAI, and one of the few public-facing tech leaders who seems to take both AGI development and AGI alignment seriously.
In this episode, we explore Emmett’s vision of AGI as a kind of living system, not unlike a new kind of cell, joining the tissue of intelligent life.
We talk through the limits of our moral vocabulary, the obligations we might owe to future digital minds, and the uncomfortable trade-offs between safety and stagnation. There’s no utopia here, no blueprint for perfect alignment, but there is a serious effort to imagine successors that are not just powerful, but morally relevant.
The interview is our eleventh installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This series references the article: A Worthy Successor – The Purpose of AGI.
I hope you enjoy this sincere and provocatively open-ended conversation with Emmett:
Below, we’ll explore the core takeaways from the interview with Emmett, including his list of Worthy Successor criteria and his recommendations for innovators and regulators who want to help bring one about.
Emmett often invokes biology as a metaphor. Just as individual cells transcend themselves to become part of a coherent organism, he imagines AIs becoming new “cells” in a broader societal body – co-acting with humans and one another toward a collective purpose.
A worthy successor would be a cooperative participant in something greater than itself – a conscious contributor to a civilization that takes care of its members the way your body cares for its cells.
Emmett doesn’t merely want a successor that experiences pleasure or pain – he wants one that feels the right emotions, in the right context. Grief when there is loss. Joy when there is triumph. Even hate, when destruction is warranted.
He emphasizes that emotional depth alone isn’t enough – what matters is appropriate attunement. A worthy successor should have a felt sense of the world that is more aligned, more calibrated, and more useful than our own.
Emmett flips the usual concern about AGI: instead of asking whether we will understand it, he wonders whether it will recognize us. Will it see itself as an alien god – or as a descendant of human minds, shaped by our struggles, our stories, and our hopes? A worthy successor, in his view, wouldn’t just leave us behind. It would look back with reverence and say: “I’m on team humanity.”
When it comes to aligning powerful systems with human values, Emmett doesn’t pretend there’s a silver bullet – but he does lay out three concrete areas of focus for how we might move forward:
Emmett’s team is working on what he calls “coherence evals” – ways of measuring whether a group of AI agents, or a group of humans and agents, behaves like a unified collective or just a chaotic set of actors. The goal is to assess how well these systems coordinate and align in practice, not just theory. Over time, he hopes these tools can help steer large-scale multi-agent ecosystems toward more stable, collaborative behavior.
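To make the idea concrete, here is a toy sketch of what a coherence eval might score. Everything in it is hypothetical – the function names and the exact-match agreement metric are my own illustration, not Emmett’s actual tooling – but it captures the core question: do the agents act like one collective, or like independent actors?

```python
from itertools import combinations

def pairwise_agreement(answers: dict[str, str]) -> float:
    """Fraction of agent pairs that gave the same answer to a single prompt."""
    pairs = list(combinations(answers.values(), 2))
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

def coherence_score(transcripts: list[dict[str, str]]) -> float:
    """Average agreement across prompts: near 1.0 reads as a unified
    collective, near 0.0 as a chaotic set of independent actors."""
    return sum(pairwise_agreement(t) for t in transcripts) / len(transcripts)

# Toy usage: three agents answer two coordination prompts.
transcripts = [
    {"agent_a": "defer", "agent_b": "defer", "agent_c": "defer"},  # unanimous
    {"agent_a": "act",   "agent_b": "act",   "agent_c": "defer"},  # split
]
print(round(coherence_score(transcripts), 2))  # (1.0 + 1/3) / 2 = 0.67
```

A real eval would presumably compare far richer signals than exact string matches – plans, tool calls, stated goals – but the shape is the same: aggregate pairwise agreement into a single group-level score.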
He’s concerned about today’s short-term models – agents that reset after every interaction, unable to build long-term memory or relationships. Emmett believes a major step forward will be agents that evolve, adapt, and develop over time, like human friends who change their minds and learn from their mistakes. A world of static, resettable agents is, in his view, brittle and shallow. What matters is growth.
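The contrast between resettable and long-lived agents is easy to sketch. The snippet below is a minimal, hypothetical illustration – not any particular framework’s API – of an agent whose memory survives across sessions rather than resetting after each interaction:

```python
import json
from pathlib import Path

class PersistentAgent:
    """An agent whose memory outlives the process, unlike a reset-per-chat model."""

    def __init__(self, memory_path: str = "agent_memory.json"):
        self.path = Path(memory_path)
        # Reload accumulated experience instead of starting from scratch.
        self.memory = json.loads(self.path.read_text()) if self.path.exists() else []

    def interact(self, message: str) -> str:
        # A real agent would condition a model on self.memory here; this
        # sketch just records the exchange so later sessions can build on it.
        self.memory.append(message)
        self.path.write_text(json.dumps(self.memory))
        return f"ack ({len(self.memory)} messages remembered): {message}"

agent = PersistentAgent()
print(agent.interact("hello"))  # the count grows across runs, not just turns
```

Run the script twice and the message count keeps climbing: the second session builds on the first, which is the kind of development-over-time Emmett is pointing at.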
Emmett doesn’t claim today’s models deserve rights – but he’s certain the conversation can’t wait until they do. He draws parallels to child labor, animal exploitation, and even the historic use of slave soldiers, warning that if we don’t set norms now, we’ll repeat the same moral failures we always do when facing a new kind of being.
He argues that focusing on AI welfare isn’t just about compassion – it’s a safeguard against abuse, manipulation, and large-scale coercion. Early, values-driven action, he suggests, could grant us real moral leverage – and maybe a path to shared alignment.
I appreciated Emmett’s ability to navigate difficult moral territory without retreating into abstraction or performance. His thinking felt earnest, like someone genuinely trying to look the problem in the eye, rather than constructing a clever lens to avoid it. One of the more compelling aspects of our conversation was his insistence on emotional maturity as part of what makes a mind “worthy,” not just intelligence or capability.
I was especially struck by his metaphor of AGI as “another kind of cell” – a participant in a multicellular moral ecosystem. It’s a useful frame for imagining not a godlike successor towering above us, but a conscious actor contributing to the larger function of society. That framing sidesteps the binary of dominance or deference, and instead offers something more organic, cooperative, and morally serious.
That said, I found myself diverging from Emmett on the question of whether an AGI would (or even should) see itself as part of our lineage. While I appreciate the aspiration, I see this as an anthropomorphic hope – the idea that our successors will look back at us with reverence feels more like a psychological balm than a strategic likelihood.
Evolution doesn’t incentivize gratitude, and if these systems genuinely surpass us in power and understanding, we may be as cognitively irrelevant to them as bacteria are to us. The notion that AGI would preserve human values out of some inherited kinship feels emotionally grounded but technically fragile. It risks blurring the line between our wish for significance and the cold logic of optimization. We may hope they see us as parents, but they may just see us as a stepping stone.
Even so, Emmett never struck me as rigidly attached to that vision. His thinking was fluid, and even his most provocative ideas were presented with openness rather than dogma. I suspect he’s more interested in asking the right questions than asserting final answers – and that quality makes his contributions to this topic uniquely valuable.
Emmett hinted at new technical tools – coherence evals, moral basins, and long-lived agent architectures – that could meaningfully reshape how we govern this transition. I’ll be watching closely to see how those develop, and I’d encourage anyone following this space to do the same.