Joining us for the eighth episode of our AGI Governance series on The Trajectory is Craig Mundie, former Chief Research and Strategy Officer at Microsoft and longtime advisor on the evolution of digital infrastructure, AI, and national security. Over decades of building systems at scale, Craig has developed a view of AGI governance grounded not in abstract idealism, but in hard-won realism about how industries and governments actually function.
Craig is one of the few interviewees in this series who explicitly challenges the idea that governance should begin with regulation. Instead, he advocates for a crawl-walk-run approach led by the major labs themselves. He also offers a broader philosophical frame: that we are in a brief but crucial window of coexistence with AI, and that what we choose to build during this time may shape whether humans evolve with intelligence or are replaced by it.
In this episode, Craig and I explore how bottom-up governance could emerge from commercial pressures and cross-national enterprise collaboration, and how this pragmatic foundation might lead us into a future of symbiotic co-evolution rather than catastrophic conflict.
Never in my time running interviews about AGI governance have I had someone state so frankly that – at some point – we don’t know what the future trajectory of intelligence will look like, or how relevant humans will be. Craig’s honesty about the cosmic future is outlandishly rare, and his ideas about how to get to co-evolution seem plausible and worthy of discussion.
I hope you enjoy this conversation with Craig:
Below, I’ll summarize Craig’s main points from each of the four sections of our interview.
Craig believes the core goal of AGI governance should be to proactively manage real, immediate risks while building the foundations for long-term safety. He warns that we’re already facing concrete threats, like the acceleration of cyberattacks due to AI – and these aren’t hypothetical. Effective governance, in his view, should begin by tackling these near-term issues, using them as testbeds to develop models of collaboration and control. This, he argues, is the only viable path to build systems that might eventually address more catastrophic risks.
Attempting to leap straight to comprehensive governance, he cautions, is unrealistic. With no existing playbook for regulating something this powerful, we have to move fast – but incrementally. “I’m saying you got to have kind of a crawl, walk, run approach,” he explains. “But you’d better get from here to a world-class sprinter in a relatively short amount of time.” The road ahead, he says, won’t be solved with one clean breakthrough – but with iterative progress that begins now.
Craig believes AGI governance, at least in its early stages, won’t come from top-down regulations imposed by governments or academic bodies. Instead, it will likely emerge from the companies building the technology through consensus, mutual interest, and eventual standardization. He compares this to the way software standards have historically evolved: not from external mandates, but from competing firms aligning around interoperability once market pressures made it necessary. With AI, he says, we’re just now reaching that inflection point.
In his view, a small group of major labs will need to proactively agree on shared guardrails they can all implement and offer to the world. That path, he argues, is far more realistic than expecting outside institutions to “cram down” rules without understanding the underlying technological negotiations required. Ideally, he says, this industry-led alignment could even extend internationally – with, for example, U.S. and Chinese labs collaborating on initial safety protocols. From there, a global regulatory framework might begin to take shape.
For innovators, Craig believes the near-term focus should be on recognizing shared market problems and responding to the mounting demand from enterprises for safe, interoperable systems. He cautions against expecting labs to surrender trade secrets or market dominance – instead, the way forward lies in identifying mutual pressure points. These include customer demands for cross-platform compatibility, regulatory compliance, and predictable behavior in increasingly agentic systems.
Craig sees the emergence of AI platforms and application layers – similar to the evolution of the PC or iPhone – as a familiar arc in tech. He predicts that enterprise buyers, especially those navigating global compliance risks and multi-AI-stack integrations, will drive innovation toward common standards. In his view, this is not just possible, but inevitable: as agents begin interacting across providers like OpenAI, Anthropic, and xAI, innovators will be compelled to create tools that ensure reliable communication and coordination. These market signals, not moral appeals, are what he expects will shape the earliest forms of functional, bottom-up governance.
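To make the interoperability point concrete, here is a minimal sketch, in Python, of what a provider-neutral message envelope between agents might look like. Every name here (AgentEnvelope, the provider strings, the intent vocabulary) is a hypothetical illustration for this post, not any real lab’s API or anything Craig specified:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

# Hypothetical, provider-neutral envelope that agents built on different
# stacks (e.g., OpenAI, Anthropic, xAI) could agree to exchange. The point
# is the shared schema, not any one vendor's implementation.
@dataclass
class AgentEnvelope:
    sender: str              # e.g., "agent://acme/procurement"
    recipient: str           # e.g., "agent://globex/fulfillment"
    provider: str            # stack the sender runs on, e.g., "anthropic"
    intent: str              # machine-readable purpose, e.g., "quote.request"
    payload: dict[str, Any]  # task-specific content
    sent_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def validate(envelope: AgentEnvelope) -> list[str]:
    """Return a list of schema violations; an empty list means conformant."""
    errors = []
    if not envelope.intent:
        errors.append("missing intent")
    if "://" not in envelope.sender or "://" not in envelope.recipient:
        errors.append("sender/recipient must be addressable URIs")
    return errors
```

In this picture, the substance of bottom-up governance is agreement on the schema and its validation rules – the kind of thing competing labs can standardize without surrendering trade secrets.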
For regulators, Craig argues the most productive role is not to impose top-down mandates, but to catalyze coordination through smart incentives and infrastructure. Instead of dictating rules before the underlying systems are understood, governments should focus on enabling companies to build mechanisms that allow AI products to conform to the legal and ethical standards of different regions. In Craig’s view, this kind of mutual operability – where U.S. products can operate safely in China, and vice versa – requires creating trust frameworks that are location-compliant, not developer-compliant. Regulators, he suggests, should reward the formation of shared systems that can attest to behavior based on local norms.
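One way to picture “location-compliant, not developer-compliant” is as conformance checked against a table keyed by jurisdiction rather than by vendor. The sketch below is a deliberately toy illustration; the region codes and rule strings are invented for the example, not drawn from the interview:

```python
# Hypothetical jurisdiction -> required-rules table. Real regimes would be
# far richer; the point is that conformance depends on where the system
# operates, not on who built it.
REGION_POLICIES: dict[str, set[str]] = {
    "EU": {"data_residency", "explainability_notice"},
    "US": {"export_control_check"},
    "CN": {"data_residency", "content_review"},
}

def attest(region: str, satisfied_rules: set[str]) -> bool:
    """True if a deployment satisfies every rule its host region requires."""
    required = REGION_POLICIES.get(region, set())
    return required <= satisfied_rules

# The same product passes or fails based on location, not developer.
assert attest("US", {"export_control_check", "data_residency"})
assert not attest("EU", {"export_control_check"})
```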
Ultimately, he sees a future where the job of monitoring and enforcement will itself be handled by AI – because only AI will be fast and comprehensive enough to track, interpret, and ensure conformance in real time. Rather than attempting to design this kind of oversight in advance, regulators should begin with today’s near-term problems and use them to model scalable, automated governance.
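A minimal sketch of what AI-driven conformance monitoring could reduce to in code: a scoring function (in practice, a learned model; here a trivial rule-based stub so the loop runs) applied to a stream of agent events, with low-scoring events escalated. All event fields, intents, and thresholds below are assumptions made for illustration:

```python
# Hypothetical real-time conformance monitor. In Craig's framing the scorer
# would itself be an AI model; a toy heuristic stands in here.
def conformance_score(event: dict) -> float:
    # Toy heuristic: events declaring a known-allowed intent score high.
    allowed_intents = {"quote.request", "status.update"}
    return 1.0 if event.get("intent") in allowed_intents else 0.2

def monitor(stream, threshold: float = 0.8):
    """Yield an escalation record for any event scoring below the threshold."""
    for event in stream:
        score = conformance_score(event)
        if score < threshold:
            yield {"event": event, "score": score, "action": "escalate"}

events = [
    {"agent": "a1", "intent": "quote.request"},
    {"agent": "a2", "intent": "exfiltrate.data"},  # should be flagged
]
for alert in monitor(events):
    print(alert)
```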