The Grand Trajectory of Intelligence and Sentience
The “Grand Trajectory” refers to the direction of the development of intelligence and sentience itself. If the following two hypotheses are true: The moral worth of an entity can be…
Episode 1 of The Trajectory is with none other than the Turing Award winner, MILA Scientific Director, ML godfather, and (very recently) AI safety advocate, Yoshua Bengio.
My first interview with Yoshua was 9 long years ago – when AI risk wasn’t on his radar at all, and frankly seemed out of reach (see my original AI risk poll from 9-10 years ago, including Yoshua’s response).
Since then, he introduced me to some excellent Montreal AI entrepreneurs before my first Montreal AI pilgrimage and media coverage in 2017, and I’ve followed his innovations and developments through the mainstream media and – oddly – through Facebook.
The interview is Episode 1 in The Trajectory’s first series, AGI Destinations, where we explore future scenarios of man and machine – which futures we should move towards, and how.
I hope you enjoy this conversation with Yoshua:
In this article I’ll explore Yoshua’s position on the ITPM, and highlight some of the more interesting takeaways from the episode itself.
The entire “AGI Destinations” series of interviews hinges in part on discussions around the ITPM (full article here) – a tool for roughly mapping possible futures involving man, cyborgs, and AI. The ITPM isn’t intended as a permanent label for a thinker, but rather as a reference point for the future they’re currently aiming to move towards.
Bengio roughly places himself in the early part of B2. He sees gradual, coordinated progress in emerging tech and AI as inevitable and probably a net good – and he does see an eventual transition to post-human life – but it’s not one he thinks we should hurl ourselves into (see the section below about “Preventing an Unworthy Successor”).
He is generally philosophically aligned with open source, but believes that rapid AI development with no coordination or guidelines is almost certainly too dangerous to endure – and that more international alignment is needed (he articulates many of these ideas in the interview above, as well as on his personal website).
I was expecting some politically correct talk about how “machines must serve man eternally”, but I didn’t get that with Yoshua.
His idea roughly boiled down to the following:
For people who think we should race to build AGI in all directions with zero coordination of any kind — have more empathy for beings who are alive now.
For people who think humanity should be at the top of the food chain for eternity — have more empathy for the totality of possible life, and understand our place in a grander picture of nature.
I spoke with Bengio 9 long years ago, when he took part in a previous poll I ran about AI risk. In that poll he more or less said that AI risk (beyond job loss) wasn’t worth thinking about seriously, even on a horizon of 100 years or so.
Two things he said stood out:
Across 1100+ interviews with Emerj, very few guests – never mind academics – have frankly spoken to me about “being wrong.” Bengio speaks candidly about the evolution of his thought process and it’s worth listening to.
A number of times during the interview Bengio expresses the idea that “AI on steroids” won’t necessarily lead us to a great blooming of intelligence and sentience into the galaxy.
Bengio is very much of the belief that open source progress in 100 competing directions won’t automatically lead to prosperity/peace for humans (i.e. good outcome for homo sapiens), and a blooming of sparkling intelligence into the galaxy (i.e. good outcome for life itself).
An open-source fan at heart, he goes into a number of his own ideas for coordination mechanisms and governance in the full episode above. Some of his ideas include:
…
I’m grateful to have had Yoshua as episode 1 in this series – and I hope dearly that I’ve done my job in asking the hard moral questions about posthuman directions. This is what The Trajectory is about.