Stephen Ibaraki – The Beginning of AGI Global Coordination [AGI Governance, Episode 3]

Joining us for the third episode of our AGI Governance series on The Trajectory is Stephen Ibaraki, Founder of the UN ITU AI for Good and Chairman at REDDS Capital.

Drawing on his many years of involvement with ACM, IEEE, and other technical associations, Stephen makes a compelling case for (a) how technical standards and ethics work has already set a good foundation for AI development, and (b) how technical standards (combined with other kinds of governance) might help to coordinate Great Powers to bring about AGI safely.

While “AGI governance” is too often strawmanned as necessarily a kind of one-world totalitarian government, Stephen clearly lays out alternative paths (with existing and historical precedents) for aligning nations and working towards better shared outcomes – there’s a lot of nuance in this episode that most of the AGI governance dialogue misses.

Stephen is one of the very rare people in the intergovernmental / policy space who has a strong grasp of brain-computer interfaces and other pathways to posthuman intelligence, and he unpacks his thoughts on how AGI and BCI might develop and be managed in parallel.

I hope you enjoy this conversation with Stephen:

Below, I’ll summarize Stephen’s main points from each of the four sections of our interview.

AGI Governance Q-and-A Summary – Stephen Ibaraki

1. How important is AGI governance now on a 1-10 scale?

9 out of 10.

Stephen Ibaraki estimates we are only 6-7 years away from achieving AGI, meaning that risks such as AGI developing goals and agency beyond human control are drawing nearer, with potentially catastrophic outcomes, including the extinction of humanity.

He highlighted the need for global coordination and governance to prevent an “arms race” dynamic between nations or companies developing AGI.

2. What should AGI governance attempt to do?

Stephen believes that we should establish common principles, standards, and frameworks for the safe and responsible development of AGI. This could draw on the work of organizations like the ACM, IEEE, and UNESCO.

Developing operational tools, benchmarks, and processes to monitor the progress of AGI development can help detect potentially problematic trajectories early on. Stephen believes we need to ensure transparency and collaboration between different stakeholders – including governments, corporations, scientific/technical organizations, and the public – to align incentives and prevent an uncontrolled “arms race” dynamic.

Fostering international cooperation and coordination to establish a global framework should be a priority, rather than having fragmented or conflicting national approaches.

3. What might AGI governance look like in practice?

Stephen proposed a multi-stakeholder collaboration: The governance approach would involve bringing together diverse stakeholders – including governments, corporations, academia, and civil society groups. This multi-stakeholder model is seen as crucial to align incentives and ensure a balanced approach.

The overall aim would be to create a global, collaborative framework to steer the development of AGI in a positive direction, drawing on the expertise and participation of diverse stakeholders. Pragmatic, operational tools would be a key part of this approach.

4. What should innovators and regulators do now?

The key is for both innovators and regulators to proactively collaborate and build upon existing technical work to shape the governance of transformative AI capabilities in a responsible manner. This will require an adaptive, multi-stakeholder approach.

For Innovators:

  1. Proactively engage with technical and scientific organizations like the ACM, IEEE, and IFIP to help shape ethical principles and standards for AGI development.
  2. Collaborate with other companies and industry groups (e.g. WITSA) to develop shared best practices and messaging around responsible AI innovation.
  3. Invest in developing operational tools and benchmarks to monitor the safety and progress of AGI systems.

For Regulators:

  1. Closely follow the work of technical organizations and leverage their expertise in developing regulatory approaches.
  2. Facilitate international coordination and the establishment of universal principles and standards, rather than fragmented national policies.
  3. Provide resources and support to empower the scientific and engineering communities to further develop the governance frameworks.

Follow The Trajectory