Joining us for the tenth episode of our AGI Governance series on The Trajectory is Dean Xue Lan, a longtime scholar of public policy and global governance whose recent work centers on AI safety and international coordination. He applies the “regime complex” framework to AGI, arguing that no single, top-down authority can govern a technology that spans multiple standards bodies, companies, and states.
Xue stresses that AGI governance must evolve as an adaptive network. The UN can set frameworks among nations, but companies, safety institutes, and industry associations also play critical roles. Only through combining these overlapping layers can governance respond to the challenges of an unprecedented technology.
The stakes are profound: absent shared evidence, contingency planning, and trust, early AI incidents could spark competition rather than cooperation. Xue argues that preparing “for the worst” together, while strengthening communication channels across nations, is essential to keep crises from becoming accelerants of division.
I hope you enjoy this conversation with Xue:
Below, I’ll summarize Xue’s main points from each of the three sections of our interview.
Xue frames AGI governance as a regime complex – not one body, but a web of overlapping institutions that each have partial claims and responsibilities. He stresses that the UN has a critical role in convening national governments and establishing general frameworks. At the same time, national regulations are indispensable to push companies toward commitments, while industry actors themselves must step forward with concrete obligations.
He also points to progress from new institutions and commitments. The network of AI Safety Institutes is beginning to coordinate internationally, focusing on standards, testing, and a shared research agenda. Companies in both the West and China are also signing onto commitments, which can be strengthened by regulation and external pressure.
Finally, Xue notes that this network can contribute technical tools and standards to address risks beyond loss of control, such as hallucinations or the misuse of AI for bioweapons.
For Xue, governance in practice cannot be designed as a single, all-encompassing system. Instead, it should evolve adaptively, with different regions pursuing approaches that suit their contexts. He contrasts comprehensive frameworks like the EU’s AI Act with China’s incremental model, which began with foundational laws and then added targeted measures such as the interim rules for generative AI.
He also underscores how AI’s unique nature requires embedding the technology within each society’s structures, redesigning sociotechnical systems in ways that differ country by country.
Beyond regulation, Xue sees value in industry commitments when they involve real processes: requiring companies to disclose their practices, compete on leadership, and undergo verification. This kind of industry association, he argues, can create pressure both from within the industry and from outside it.
On the international stage, he points to both new organizations and existing standard-setting bodies as potential venues for coordination. While acknowledging geopolitical obstacles, he even imagines the possibility of an international lab on AGI safety – a collaborative space where global talent could pool efforts.
For innovators, Xue points to the Chinese approach as a model for how company commitments can become more meaningful. In this framework, commitments are treated as ongoing processes rather than one-time pledges. Firms disclose how they are implementing their promises, submit to verification, and face scrutiny from both competitors and the public. By embedding commitments in industry associations and requiring transparency, innovators can create the kind of peer and societal pressure that makes safety practices harder to ignore.
For regulators, Xue argues that national governments must establish requirements that compel companies to participate in commitments and agreements. Regulations ensure that “bad players” are not left out, and they provide the necessary backbone for company cooperation. He emphasizes that without this regulatory support, voluntary commitments alone will not be sufficient.
He also emphasizes the importance of creating and expanding communication channels between countries – especially the US and China – and using those channels to gradually build trust. Without this, cooperation will remain limited, and the ability to coordinate during moments of crisis will be weakened.
Xue brings a rare view from inside China’s policy and governance circles, highlighting both the opportunities for cooperation and the challenges of trust between nations. His emphasis on adaptive governance and preparation for the worst adds a critical dimension to this series.