Yoshua Bengio – Why We Shouldn’t Blast Off to AGI Just Yet [The Trajectory Series 1: AGI Destinations, Episode 1]

Episode 1 of The Trajectory is with none other than the Turing Award winner, MILA Scientific Director, ML godfather, and (very recently) AI safety advocate, Yoshua Bengio.

My first interview with Yoshua was 9 long years ago, when AI risk wasn’t on his radar at all, and frankly seemed a distant concern (see my original AI risk poll from 9-10 years ago, including Yoshua’s response).

Since then, he introduced me to some excellent Montreal AI entrepreneurs before my first Montreal AI pilgrimage and media coverage in 2017, and I’ve followed his innovations and developments through the mainstream media and, oddly enough, through Facebook.

The interview is Episode 1 in The Trajectory’s first series, AGI Destinations, where we explore future scenarios of man and machine: which futures we should move towards, and how.

I hope you enjoy this conversation with Yoshua:

In this article, I’ll explore Yoshua’s position on the Intelligence Trajectory Political Matrix (ITPM), and highlight some of the more interesting takeaways from the episode itself.

Yoshua Bengio on the Intelligence Trajectory Political Matrix

The entire “AGI Destinations” series of interviews hinges in part on discussions around the ITPM (full article here), a tool for roughly mapping possible futures involving man, cyborgs, and AI. The ITPM isn’t intended to be a permanent label for a thinker, but rather a reference point for the future they’re currently aiming to move towards.

Bengio roughly places himself in the early part of B2. He sees gradual, coordinated progress in emerging tech and AI as inevitable and probably a net good, and he does foresee an eventual transition to posthuman life, but it’s not one he thinks we should hurl ourselves into (see the section below on “Preventing an Unworthy Successor”).

He is generally philosophically aligned with open source, but believes that rapid AI development with no coordination or guidelines is almost certainly too dangerous to endure – and that more international alignment is needed (he articulates many of these ideas in the interview above, as well as on his personal website).

Interview Takeaways

1 – Valuing Posthuman Life

I was expecting some politically correct talk about how “machines must serve man eternally”, but I didn’t get that from Yoshua.

His idea roughly boiled down to the following:

  • For people who think we should race to build AGI in all directions with zero coordination of any kind: have more empathy for beings who are alive now.
  • For people who think humanity should be at the top of the food chain for eternity: have more empathy for the totality of possible life, and understand our place in a grander picture of nature.

2 – What Changed His Mind on AGI

I spoke with Bengio 9 long years ago, and he was part of a previous poll that I ran about AI risk. In that poll he more or less said that AI risk (beyond job loss) wasn’t worth thinking about seriously, even on a 100-year horizon.

Two things he said stood out:

  • “Daniel, the things we were building were so stupid!” He was so close to the tech, looking so closely at its moving parts, that its flaws and limitations were glaringly clear; the idea of it overtaking humanity didn’t seem plausible.
  • “I didn’t want to believe this about my life’s work.” In the interview, Bengio explores his gradual process of overcoming cognitive dissonance, and how he eventually came to understand that AGI might be not only possible but close.

Across 1100+ interviews with Emerj, very few guests, never mind academics, have spoken frankly to me about “being wrong.” Bengio speaks candidly about the evolution of his thinking, and it’s worth listening to.

3 – Preventing an Unworthy Successor

A number of times during the interview Bengio expresses the idea that “AI on steroids” won’t necessarily lead us to a great blooming of intelligence and sentience into the galaxy.

Bengio is very much of the belief that open-source progress in 100 competing directions won’t automatically lead to prosperity and peace for humans (i.e., a good outcome for Homo sapiens), or to a blooming of sparkling intelligence into the galaxy (i.e., a good outcome for life itself).

An open-source fan at heart, he goes into a number of his own ideas for coordination mechanisms and governance in the full episode above. Some of his ideas include:

  • Global alignment (through the UN or other bodies) around preferable and non-preferable futures
  • Global alignment around AI research laboratories (listen to the final third of the interview for Yoshua’s more complete ideas here)

I’m grateful to have had Yoshua as Episode 1 in this series, and I hope dearly that I’ve done my job in asking the hard moral questions about posthuman directions. This is what The Trajectory is about.

Follow The Trajectory