Connor Leahy – Slamming the Brakes on the AGI Arms Race [AGI Governance, Episode 5]

Joining us for the fifth episode of our AGI Governance series on The Trajectory is Connor Leahy, Founder and CEO of Conjecture.

This is the most in-depth take I’ve ever heard on how to influence the public discourse about AGI risk, and how to turn AGI risk from a “sci-fi” notion into a commonplace topic in policy circles.

The end of this episode is worth listening to carefully if you want more political leaders to take on AGI risk as a talking point.

On top of all of this, Connor lays out his reasoning for why AGI agency is inevitable, and how governments might intervene in the near term to halt the AGI race dynamics between the major Western labs, and between the US and China.

I hope you enjoy this conversation with Connor:

Below, I’ll summarize Connor’s main points from each of the four sections of our interview.

AGI Governance Q-and-A Summary – Connor Leahy

1. How important is AGI governance now on a 1-10 scale?

10 out of 10.

Connor believes there are few things, if any, of higher priority right now than addressing AGI governance.

2. What should AGI governance attempt to do?

Connor believes that the primary goal of AGI governance should be to prevent a “game over” scenario, in which an artificial superintelligence comes into existence that is not extremely well controlled and aligned with human interests. The secondary goal should then be to work towards a good outcome.

He also emphasizes that the focus must be on not getting into the “game over” state, as once that happens, it’s too late.

3. What might AGI governance look like in practice?

Connor sees developer liability, a well-established doctrine in common law, as one of the most useful tools for addressing AI risks. This could take the form of treating AI risks as an externalities problem, where the companies developing the technology are held liable for potential harms, much as polluting companies can be sued for environmental damage.

The core idea is using established legal and regulatory frameworks to incentivize AI developers to prioritize safety and alignment with human interests.

4. What should innovators and regulators do now?

The key is to make AGI risk common knowledge – Connor emphasizes the importance of making it a topic that is publicly discussed and understood, not just something known by a few experts.

He also emphasizes the need to engage with the institutions of decision-making and reasoning in a civil, non-violent manner, working through established political and legal processes rather than resorting to drastic measures.

He wants to shift the question on AGI risks from “when should we intervene?” to “why aren’t we intervening now?” Connor wants to see a sense of urgency and proactive action rather than a reactive approach, with efforts coordinated internationally to address the global nature of AGI development and prevent an arms race.

The overall emphasis should be on making AGI risks a common public concern, working through established institutions, and taking proactive steps to address the potential dangers.

Follow The Trajectory