Andrea Miotti – A Human-First AI Future [AGI Governance, Episode 4]

Joining us for the fourth episode of our series AGI Governance on The Trajectory is Andrea Miotti, Founder and Executive Director of ControlAI. He is also one of the main authors of The Narrow Path report.

We discuss a strategy for international coordination to bound the capabilities of artificial general intelligence, what it would take to put that plan into action, and, in particular, what everyday people can do to influence the trajectory of policy.

Getting great powers to come together on AGI policy is no easy task. Andrea adds a lot of detail on that front in this episode, and also shares some of his perspectives on the ultimate uncontrollability of artificial general intelligence.

I hope you enjoy this conversation with Andrea:

Below, I’ll summarize Andrea’s main points from each of the four sections of our interview.

AGI Governance Q-and-A Summary – Andrea Miotti

1. How important is AGI governance now on a 1-10 scale?

10 out of 10.

Andrea believes that there are only two ways to react to an exponential: too early, or too late.

2. What should AGI governance attempt to do?

Andrea emphasizes the importance of maintaining human control and agency, rather than allowing AI to become so powerful that it could override or replace human decision-making.

Preventing the development of superintelligent AI that could defeat humanity should be a priority until we can build the institutions and science necessary to deal with the threat. This involves measures like limiting compute resources and training runs for AI systems.

He also believes that we should create a stable international system where countries can trust each other to follow agreed-upon rules and prevent rogue actors from developing superintelligent AI in secret. This requires mutual guarantees and coordination between nations.

3. What might AGI governance look like in practice?

Andrea believes that the key is to buy time through strict safety measures, build international cooperation and trust, and then leverage AI to enable human flourishing while maintaining firm human control. This multi-pronged approach is the essence of Andrea’s vision for practical AGI governance.

In the short term, this may look like implementing a cap on the size of AI training runs and on the total computing power of individual data centers. This would make it difficult for rogue actors to rapidly develop superintelligent AI systems, as it would take them years to breach the compute limits.

4. What should innovators and regulators do now?

The key is to start building the foundations for international cooperation and normative prohibitions on uncontrolled AGI development, while also raising public awareness and mobilizing key actors to take action. This groundwork is crucial in the near term to address the existential risks posed by advanced AI systems.

Individuals, including innovators and regulators, should speak up and raise awareness about the risks of uncontrolled AGI development. They should express their concerns to friends, elected officials, and other key stakeholders. This grassroots effort to build public awareness and political will is an important first step.

Innovators and regulators should engage in track two dialogues – informal discussions between countries and experts – with the goal of building toward formal, track one negotiations and international agreements on AGI governance. These track two dialogues can help establish shared understandings and lay the groundwork for coordinated action.

High-net-worth individuals and other influential stakeholders should coordinate with each other to stop the uncontrolled development of potentially dangerous AI systems. This could involve using their influence and resources to push for stronger governance measures.

Follow The Trajectory