Eliezer Yudkowsky – Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]

The sixth and final episode of our series AGI Governance on The Trajectory is with Eliezer Yudkowsky, famed AI safety thinker and co-founder of the Machine Intelligence Research Institute.

Eliezer is one of a small handful of writers who have been thinking about AGI safety for over 20 years. His 2004 paper Coherent Extrapolated Volition was an early clarion call: systems with vastly greater-than-human capabilities may – for a variety of reasons – not simply “do what we say.” In recent years he’s been more public about his views, not only on Twitter (where he’s sometimes called YUD) but in various media outlets, including a substantial piece for Time Magazine (Pausing AI Developments Isn’t Enough. We Need to Shut it All Down).

Most of my audience here is familiar with Yudkowsky’s AI risk arguments.

This interview takes a totally different tack from any other YUD interview, for two reasons:

  • Governance in Practice: Eliezer lays out his idea of an ideal or near-ideal phased approach to AGI governance, including some particular stipulations about what kinds of AI we could still benefit from while AGI itself is being prevented.
  • What if it Goes Right: We explore Eliezer’s near- and long-term ideal future, which involves an interesting mix of human values and galaxy-populating post-human intelligences that might proliferate and preserve those values.

YUD is clear that he isn’t a policy expert and wishes to leave those details to others, but his efforts in laying out his big idea are interesting – and shed additional nuance on some potential pathways to preventing AGI catastrophe (though YUD is certainly no optimist).

I hope you enjoy this unique conversation with Eliezer:

Below, I’ll summarize Eliezer’s main points from each of the four sections of our interview.

AGI Governance Q-and-A Summary – Eliezer Yudkowsky

1. How important is AGI governance now on a 1-10 scale?

10 out of 10.

Eliezer emphasizes that if anyone, anywhere on Earth, builds an AGI system powerful enough to be lethally dangerous to humanity, it could lead to the extinction of the human race. He believes the risks are so severe that world leaders need to take urgent action to coordinate international treaties and restrictions around AI development.

2. What should AGI governance attempt to do?

Eliezer believes that the primary goal of AGI governance should be to find ways to reap the benefits of AI technology while eliminating the existential risks posed by unaligned AGI. In his view, this requires unprecedented global coordination and a steadfast commitment to preserving humanity’s future.

3. What might AGI governance look like in practice?

Eliezer envisions a symmetrical international treaty or agreement among major world powers such as the US, China, and the UK. The goal would be to establish a coordinated, non-competitive approach to AI development and safety.

The key is establishing an international framework with symmetrical rights and responsibilities to prevent any single actor from gaining a decisive advantage through unilateral AI development. In the full interview he discusses how this structure might adapt as hardware and software advances make AGI achievable with fewer and fewer resources (i.e., it won’t always take a hundred acres of datacenters to house AGI as technology advances).

4. What should innovators and regulators do now?

For innovators, Eliezer emphasizes the need to recognize the severe existential risks posed by unaligned AGI development. He sees this as an urgent threat that must be taken seriously. He also believes that people should be willing to accept restrictions and limitations on AI research and development in the interest of safety and alignment, even if this may mean slower progress in some areas.

For regulators, Eliezer thinks world leaders should publicly declare a commitment to preventing human extinction through coordinated action on AI governance. They should then initiate negotiations for international treaties and agreements that establish a framework for controlling and monitoring AI development.

He also believes that the key is for both innovators and regulators to approach this challenge with a shared sense of urgency and a willingness to make difficult tradeoffs to preserve humanity’s long-term future. Eliezer sees this as an existential imperative.

Follow The Trajectory