Toby Ord – Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)

Joining us for the seventh episode of our series AGI Governance on The Trajectory is Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice: Existential Risk and the Future of Humanity. Toby is one of the world’s most influential thinkers on long-term risk – and one of the clearest voices on how advanced AI could shape, or shatter, the trajectory of human civilization.

Before turning to existential risk, Toby focused his early work on ethics and global poverty. Today, his research zeroes in on grand, existential questions relating to trends in AI, with a sober take on how policymakers and leaders might take these trends into account.

In this episode, Toby unpacks the evolving technical and economic landscape of AGI – particularly the implications of model deployment, imitation learning, and the limits of current training paradigms. He draws on his unique position as both a moral philosopher and a close observer of recent AI breakthroughs to highlight shifts that could alter the pace and nature of AGI progress.

I hope you enjoy this conversation with Toby.

Below, I’ll summarize Toby’s main points from each of the three sections of our interview.

AGI Governance Q-and-A Summary – Toby Ord

1. What should AGI governance attempt to do?

Toby believes the core purpose of governance is to prevent existential catastrophe. He estimates that the chance of unrecoverable failure from AGI is over 10% – and warns that we could lose our place in shaping the future altogether if machine intelligence surpasses us without alignment.

He argues that handing over control to the first capable system we build would be a mistake. Governance, he says, must ensure we don’t rush into irreversible decisions – and that we create systems worth passing the torch to.

2. What might AGI governance look like in practice?

When it comes to practical solutions, Toby points to a coalition of countries as a promising path forward. He imagines a model that’s open and growing, where nations earn a seat at the table by contributing resources and agreeing not to pursue secret national AGI efforts.

This kind of international project wouldn’t just distribute responsibility – it could also offer legitimacy and trust across borders. For Toby, global risks call for global governance, and any long-term solution needs to reflect that scale.

3. What should innovators and regulators do now?

Toby draws a sobering comparison between today’s AI race and the Manhattan Project. When he first wrote The Precipice, competition was mostly between research-driven labs like DeepMind and OpenAI. Now, that race has grown more intense – fueled by trillion-dollar companies and dominated by commercial pressure. “A much smaller fraction of it is the noble ideas,” he says.

In this environment, he believes one of the most important levers may be collective action from within the scientific community itself. He points to the Bulletin of the Atomic Scientists – a post-war coalition of researchers who helped develop nuclear weapons and then joined forces to prevent catastrophe. Toby imagines a similar possibility for AI: researchers across companies and borders agreeing not to build certain kinds of systems, or refusing to cross dangerous red lines. “Individually, they don’t have that much power,” he says, “but collectively, they would.”

He notes that professional bodies like AAAI might offer a foundation for this kind of coordination, and that if the community spoke with one voice, it could help shape norms and expectations before disaster strikes.

As for governments, Toby says they must formalize oversight. Voluntary commitments from labs aren’t enough. States need clear visibility into what systems are being built, what their capabilities are, and how to halt development in an emergency. “At the moment, it’s not clear what would happen if there was… some AI catastrophe,” he says. He argues that national governments must develop clear, actionable powers – a kind of “in case of emergency, break glass” protocol – to intervene if needed.
