Nick Bostrom – former Founding Director of the Future of Humanity Institute at Oxford – joins us this week on The Trajectory. Bostrom has plenty of formal accolades, including being the most cited philosopher under the age of 50 (he is now 51), but beyond all of that, I consider him the foremost posthuman thinker now living, and a crucial voice for the Worthy Successor discussion.
It’s been a full ten years since my last interview with Nick, and he didn’t disappoint. The interview is the first installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This series references the article: A Worthy Successor – The Purpose of AGI.
In this episode he shares a somewhat hopeful view of what life might be like for humans (and posthuman intelligences) after the emergence of AGI.
I hope you enjoy this conversation with Nick Bostrom:
Below, we’ll explore the core takeaways from the interview with Nick, including his list of Worthy Successor criteria, and his ideas about how best to leverage governance to improve the likelihood that whatever we create is, in fact, worthy.
Nick (understandably, as a living thing himself) would prefer that existing living things get the opportunity to thrive alongside the superintelligence, rather than being replaced by it. He sees existing life as a valuable slice of the current value-space, one that should be maintained and explored.
In addition to creating various new forms of digital life, he hopes that humans and some animals get to bloom into posthuman beings – expanding our powers and our experience beyond our present limitations of thought, sense, action, and understanding. To paraphrase Nick:
It might be seen as a shame that we develop so much from age 5 to 30, and then change so little after that. Perhaps there is a much fuller development we could enjoy that we’ve previously not been able to experience.
Nick believes that a Worthy Successor would allow people to choose whether or not they want to change. He sees a world where some humans prefer to remain in the monkey suit, while others augment themselves, and still others merge with greater intelligence or exist as blissfully expansive mind-uploaded consciousnesses.
Within some moral constraints, it would pursue its own grand, expansive aims, beyond human goals and imagination.
Just as humans have goals remarkably more advanced and complex than anything a gerbil could imagine, a Worthy Successor might have goals and objectives wildly beyond those which we humans can imagine. And just as it would be ridiculous for humans to have their goals limited by the imagination of gerbils, the goals of an AGI probably should not be hindered by the imagination of hominids.
That said, Nick does stipulate that some moral values should be upheld well beyond the human form – he suspects that there may be a kind of moral bedrock that greater intelligences could tap into, one that would ensure that things like torture or slavery are never seen as acceptable.
A purely arms-race dynamic among international AGI labs would be dangerous, and some coordination seems best, though Nick doesn’t have a strong preference for which international body handles said coordination.
Nick thinks that a temporary pause of 6-12 months might be right – but he warns that such a pause risks hardening into a permanent ban on AI, which he would consider a terrible outcome, as it would prevent the blooming of greater minds and potentially prevent the uplifting of human minds to new heights.
If machines are conscious, they should have moral consideration, both legally and culturally. Of course, today we don’t have a very good way to detect sentience, but if it can be determined that machines have an inner experience as we do – Nick thinks that they deserve moral consideration.
Ham-fistedly imposing a single governance approach is likely to miss nuance and do more harm than good. Nick was clear in our interview that some kind of light pause directly before superintelligence is reached (if such a “goldilocks zone” could ever be determined) might be ideal – but that having one human ideology lead the development of AGI is probably a bad idea.
I’m grateful to have had Bostrom as the first guest in this series – it was a pleasure to connect with him after a full decade since our last interview. I was frankly surprised by his anthropocentric takes on value – and his belief that humanity might reasonably expect to survive (or to have our wellbeing and survival prioritized) under vastly posthuman intelligences, which I’ve long considered wildly unlikely.
My intuition tells me that Nick knows full well that the wild fecundity of values and actions and powers and priorities created by a real superintelligence would almost certainly spell either the fast or gradual attenuation of humanity. He accepts this, I suspect. If you’ve read his first book, or any of his papers and essays (I recommend What is a Singleton? and Sharing the World with Digital Minds), you know he’s more than aware of the potential uncontrollability of AGI, and of the potentially more vast moral claims that such superminds would have over the comparatively meager needs and “values” of hominids.
I suspect that his purpose with this latest book (Deep Utopia) isn’t actually a change of heart around the likelihood of humans surviving (or mattering very much) after AGI’s emergence. Rather, I suspect that his latest book is intended mostly to prevent a Butlerian Jihad-style ban on AGI – a ban that would prevent AGI from ever emerging.
That said, I won’t put words in Nick’s mouth – these are merely my own speculations. Nick long ago expressed visions of humans using AGI as a carefully constructed extrapolation of human values (an idea I don’t agree with), and it’s possible that he really does think that human flourishing after AGI is accessible.
What do you think? — Drop your comments on the YouTube video and let me know.
In either case, this conversation was a blast – and I hope you enjoyed it.