This is an interview with Max Tegmark, MIT professor, founder of the Future of Life Institute, and author of Life 3.0. This interview was recorded on-site at AI Safety Connect 2025, a side event of the AI Action Summit in Paris.
Max has been thinking about AGI for longer than most (his 2014 TEDx talk on machine consciousness is excellent), and I consider Life 3.0 to be well ahead of its time: it explored possible AGI futures (albeit all framed in an overtly anthropocentric way) long before doing so was mainstream.
In this episode, Max shares his insights on:
I hope you enjoy this conversation with Max Tegmark:
Max offers a thought-provoking perspective on how AGI development might force key decision-makers (particularly in military and geopolitical spheres) to acknowledge the gravity of the technology and reach some form of international consensus.
While I appreciate his optimism, I remain skeptical that the slow and steady progress of AI will naturally ring alarm bells. If anything, the past decade has shown that we are checking off many milestones of general intelligence without triggering widespread urgency.
That said, if organic recognition of AGI’s risks is not guaranteed, what deliberate efforts could catalyze the right kind of international dialogue?
One approach is the "scary demo": a high-impact demonstration of AI capabilities that highlights potential risks, such as deepfakes or autonomous weaponry. Slaughterbots is a good example of a short film that takes this approach.
However, there may be other levers to pull. For example:
Ultimately, there is an opportunity to formalize a “grocery list” of strategic interventions—scary demos, envy-based leverage, and other mechanisms—that could spark collaboration on AGI governance. The question remains: What fuel should we burn to achieve alignment, and do the ends justify the means? These are open questions worth deeper exploration, and I look forward to furthering the conversation.
Lastly, I appreciate Max's frankness that there are possible futures where humans don't even want to eternally control AGI, and that a positive long-term future would involve vastly posthuman entities doing posthuman things (expanding the flame of life / potentia into the multiverse).
Given his position as a policy thinker and AGI safety advocate (running a rather substantial institute), we can expect narratives like Keep the Future Human, which overtly paint humanity (and maybe other biological life) as the sole morally relevant entities in the known cosmos. It's hard to raise funds or rally the public for anything that even admits of the posthuman.
While I might differ from Max in believing that posthuman futures should be discussed openly even ahead of AGI, I do agree with him completely that our first priority should be global coordination to make sure we don’t hurl an unworthy AGI successor into the world.