The sixth and final episode of our series AGI Governance on The Trajectory is with Eliezer Yudkowsky, famed AI safety thinker and co-founder of the Machine Intelligence Research Institute.
Eliezer is one of a small handful of writers who have been thinking about AGI safety for over 20 years. His 2004 paper Coherent Extrapolated Volition was an early warning that systems with vastly greater-than-human capabilities may – for a variety of reasons – not simply “do what we say.” In recent years he’s been more public about his views, not only on Twitter (where he’s sometimes called YUD) but in various media outlets, including a substantial piece for Time Magazine (Pausing AI Developments Isn’t Enough. We Need to Shut it All Down).
Most of my audience here is familiar with Yudkowsky’s AI risk arguments.
This interview takes a totally different tack than any other YUD interview, for two reasons:
YUD is clear that he isn’t a policy expert and wishes to leave those details to others, but his efforts in laying out his big idea are interesting – and add a lot of nuance to some potential pathways to preventing AGI catastrophe (though YUD is certainly no optimist).
I hope you enjoy this unique conversation with Eliezer:
Below, I’ll summarize Eliezer’s main points from each of the four sections of our interview.
10 out of 10.
Eliezer emphasizes that if anyone, anywhere on Earth, builds an AGI system powerful enough to be lethally dangerous to humanity, it could lead to the extinction of the human race. He believes the risks are so severe that world leaders need to take urgent action to coordinate international treaties and restrictions around AI development.
Eliezer believes that the primary goal of AGI governance should be to find ways to reap the benefits of AI technology while eliminating the existential risks posed by unaligned AGI. He believes this requires unprecedented global coordination and a steadfast commitment to preserving humanity’s future.
Eliezer envisions a symmetrical international treaty or agreement between major world powers like the US, China, UK, etc. The goal would be to establish a coordinated, non-competitive approach to AI development and safety.
The key is establishing an international framework with symmetrical rights and responsibilities to prevent any single actor from gaining a decisive advantage through unilateral AI development. In the full interview he discusses how this structure might adapt as hardware and software advances make AGI accessible with fewer and fewer resources (i.e., it won’t take a hundred acres of datacenters to house AGI as technology advances).
For innovators, Eliezer emphasizes the need to recognize the severe existential risks posed by unaligned AGI development. He sees this as an urgent threat that must be taken seriously. He also believes that people should be willing to accept restrictions and limitations on AI research and development in the interest of safety and alignment, even if this may mean slower progress in some areas.
For regulators, Eliezer thinks world leaders should publicly declare a commitment to preventing human extinction through coordinated action on AI governance. They should initiate negotiations for international treaties and agreements to establish a framework for controlling and monitoring AI development.
He also believes that the key is for both innovators and regulators to approach this challenge with a shared sense of urgency and a willingness to make difficult tradeoffs to preserve humanity’s long-term future. Eliezer sees this as an existential imperative.
…