Max Tegmark – The Lynchpin Factors to Achieving AGI Governance [The Trajectory Series 4: AI Safety Connect, Episode 1]

This is an interview with Max Tegmark, MIT professor, Founder of the Future of Life Institute, and author of Life 3.0. This interview was recorded on-site at AI Safety Connect 2025, a side event from the AI Action Summit in Paris.

Max has been thinking about AGI for longer than most (his 2014 TEDx talk on machine consciousness is excellent), and I consider Life 3.0 to be well ahead of its time in mapping possible AGI futures (though all of them were framed in an overtly anthropocentric way), written well before such speculation was mainstream.

In this episode Max shares his insights on:

  • The most important factors for catalyzing international coordination (especially between the US and China) around preventing rogue and uncontrollable AGI – including his framework for scoring AGI risk, and his view of how different stakeholders (from researchers to military leaders) can help bring about international agreement (see Max’s Keep the Future Human page here, which outlines the A.G.I. risk framework discussed in the interview).
  • Why even long-term vastly posthuman futures should absolutely start with some degree of global coordination and human safety.

I hope you enjoy this conversation with Max Tegmark:

Takeaways and Concluding Thoughts

Max offers a thought-provoking perspective on how AGI development might force key decision-makers (particularly in military and geopolitical spheres) to acknowledge the gravity of the technology and reach some form of international consensus.

While I appreciate his optimism, I remain skeptical that the slow and steady progress of AI will naturally ring alarm bells. If anything, the past decade has shown that we can check off many milestones of general intelligence without triggering widespread urgency.

That said, if organic recognition of AGI’s risks is not guaranteed, what deliberate efforts could catalyze the right kind of international dialogue?

One approach is the scary demo – high-impact demonstrations of AI capabilities that highlight potential risks, such as deepfakes or autonomous weaponry. Slaughterbots is a good example of a short film that takes this approach.

However, there may be other levers to pull. For example:

  • The Envy Demo – Could highlighting the lack of regulation in AI compared to industries like finance or life sciences create pressure for global coordination?
  • Stakeholder Expansion – Beyond the US and China, are there underutilized stakeholders (corporations, regulatory bodies, or specific industry leaders) that could serve as catalysts for cooperation?
  • Trigger Mapping – What specific events or advancements could realistically drive international action, and who is actively working on identifying these triggers?

Ultimately, there is an opportunity to formalize a “grocery list” of strategic interventions (scary demos, envy-based leverage, and other mechanisms) that could spark collaboration on AGI governance. The question remains: What fuel should we burn to achieve alignment, and do the ends justify the means? These are open questions worth deeper exploration, and I look forward to furthering the conversation.

Lastly, I appreciate Max being frank about there being possible futures where humans don’t even want to eternally control AGI, and that a positive long-term future would involve vastly posthuman entities doing posthuman things (expanding the flame of life / potentia into the multiverse).

Given his position as a policy thinker and AGI safety advocate (running a rather substantial institute), we can expect narratives like Keep the Future Human, which overtly paint humanity (and maybe other biological life) as the sole morally relevant entities in the known cosmos. It’s hard to raise funds or rally the public for anything that even admits of the posthuman.

While I differ from Max in believing that posthuman futures should be discussed openly even ahead of AGI, I agree with him completely that our first priority should be global coordination to make sure we don’t hurl an unworthy AGI successor into the world.