Sebastien Krier of Google DeepMind joins us in our first episode of a brand new AGI Governance series on The Trajectory.
Beginning his career studying law at King's College, Sebastien became the Head of Regulation for the UK's Office for Artificial Intelligence in 2018. Today he works in policy and strategy development at Google DeepMind, a role he has held for the last two years.
Seb’s approach to monitoring AGI development involves a focus on capabilities and risks – as well as rethinking participatory governmental processes to be more nimble and representative. Seb sees society’s infrastructure writ large (rather than just policymakers) as being involved in the changes ahead, and shares ideas of what a more nimble process could look like.
Along the way, we unpack his desired future scenarios of man and machine – and he and I hash out his thoughts about whether AI will empower our will and agency, or diminish it. Seb is my favorite policy thinker within any of the major AGI labs, and I’m grateful he could join us as guest #1 in this series.
I hope you enjoy this conversation with Sebastien:
Below, I’ll summarize Seb’s main points from each of the four sections of our interview.
5.5 out of 10.
Seb believes that it’s important to monitor, research, and hold high-level discussions on governance, but he’s cautious about rushing into new global institutions or mandates right now. His stance reflects a balanced approach, emphasizing the need to understand potential risks and capabilities incrementally rather than enforcing rigid policies prematurely.
This incremental approach seems to be the stance of most of the large AGI labs.
Sebastien sees AGI governance as having three primary aims:
Seb suggests that AGI governance would involve integrating technology within governmental operations, improving efficiency by automating certain responsibilities and enhancing state capacity.
He emphasizes that democracy itself could be reimagined, using AI to better synthesize public opinion and streamline constituent feedback, making the democratic process more responsive. In Seb's view, governments may want to simulate policy outcomes to refine decision-making and explore new voting methodologies, and he acknowledges that global AGI governance might require international collaboration and faster, more adaptable processes.
(His emphasis on more robust and participatory governmental processes was a major theme in our pre-interview conversation and during the interview itself – maybe one day these topics will warrant their own interview series about the future of governance as we approach strong AI.)
Seb suggests that for effective AGI governance, it’s crucial to involve experts from diverse fields, like economics, to provide fresh perspectives on AGI’s impact across various domains. He advocates for more exploration of international governance and AI’s potential for public good, coupled with discussions about ideal outcomes and potential risks to guide progress thoughtfully.
He also emphasizes the need for improved methodologies in evaluating AGI capabilities, drawing an analogy to randomized controlled trials in the social sciences.