Sébastien Krier – Keeping a Pulse on AGI’s Takeoff [AGI Governance, Episode 1]

Sebastien Krier of Google DeepMind joins us in the first episode of our brand-new AGI Governance series on The Trajectory.

Sebastien began his career studying law at King’s College before becoming the Head of Regulation for the UK’s Office for Artificial Intelligence in 2018. Today he works in policy and strategy development at Google DeepMind, where he has been for the last two years.

Seb’s approach to monitoring AGI development centers on capabilities and risks, as well as on rethinking participatory governmental processes to make them more nimble and representative. He sees society’s infrastructure writ large (rather than just policymakers) as being involved in the changes ahead, and he shares ideas of what such a process could look like.

Along the way, we unpack his desired future scenarios for man and machine, and he and I hash out whether AI will empower our will and agency or diminish it. Seb is my favorite policy thinker at any of the major AGI labs, and I’m grateful he could join us as guest #1 in this series.

I hope you enjoy this conversation with Sebastien:

Below, I’ll summarize Seb’s main points from each of the four sections of our interview.

AGI Governance Q-and-A Summary – Sebastien Krier

1. How important is AGI governance now on a 1-10 scale?

5.5 out of 10.

Seb believes that it’s important to monitor, research, and hold high-level discussions on governance, but he’s cautious about rushing into new global institutions or mandates right now. His stance reflects a balanced approach, emphasizing the need to understand potential risks and capabilities incrementally rather than enforcing rigid policies prematurely.

This incremental approach seems to be the stance of most of the large AGI labs.

2. What should AGI governance attempt to do?

Sebastien sees AGI governance as having three primary aims:

  1. Prepare a strong infrastructure, workforce, and governmental capacity for managing AGI’s arrival.
  2. Prioritize safety measures to mitigate risks associated with AGI deployment.
  3. Foster cooperation and security between nations, ideally helping to avoid international conflicts driven by competition over AGI capabilities. Such a framework would include clear laws, safety protocols, and democratic oversight to ensure AGI serves societal needs responsibly.

3. What might AGI governance look like in practice?

Seb suggests that AGI governance would involve integrating technology into governmental operations, improving efficiency by automating certain responsibilities and enhancing state capacity.

He emphasized that democracy could also be reimagined, using AI to better synthesize public opinion and streamline constituent feedback, making the democratic process more responsive. In Seb’s view, governments may want to simulate policy outcomes to refine decision-making and explore new voting methodologies, while acknowledging that global AGI governance might require international collaboration and faster, more adaptable processes.

(His emphasis on more robust and participatory governmental processes was a major theme in our pre-interview conversation and during the interview itself – maybe one day these topics will warrant their own interview series about the future of governance as we approach strong AI.)

4. What should innovators and regulators do now?

Seb suggests that for effective AGI governance, it’s crucial to involve experts from diverse fields, like economics, to provide fresh perspectives on AGI’s impact across various domains. He advocates for more exploration of international governance and AI’s potential for public good, coupled with discussions about ideal outcomes and potential risks to guide progress thoughtfully.

He also emphasizes the need for improved methodologies in evaluating AGI capabilities, drawing analogies to randomized controlled trials in the social sciences.

Follow The Trajectory