Nick Bostrom – AGI That Saves Room for Us [The Trajectory Series 2: Worthy Successor Series, Episode 1]

Nick Bostrom – former Founding Director of the Future of Humanity Institute at Oxford – joins this week on The Trajectory. Bostrom has plenty of formal accolades, including being the most cited philosopher under the age of 50 (he is now 51), but beyond all of that – I consider him to be the foremost posthuman thinker now living, and a crucial voice for the Worthy Successor discussion.

It’s been a full ten years since my last interview with Nick, and he didn’t disappoint. The interview is our first installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This series references the article: A Worthy Successor – The Purpose of AGI.

In this episode he shares a somewhat hopeful view of what life might be like for humans (and posthuman intelligences) after the emergence of AGI.

I hope you enjoy this conversation with Nick Bostrom:

Below, we’ll explore the core take-aways from the interview with Nick, including his list of Worthy Successor criteria, and his ideas about how to best leverage governance to improve the likelihood that whatever we create is, in fact, worthy.

Nick Bostrom’s Worthy Successor List

1. Is a continuation of existing life, not a do-over.

Nick (understandably, as a living thing himself) would prefer that existing living things get the opportunity to thrive along with the superintelligence, rather than being replaced by it. He sees existing life as a valuable slice of the value-space, one that should be maintained and explored.

In addition to creating various new forms of digital life, he also hopes that humans and some animals get to bloom into posthuman beings – expanding our powers and our experience beyond our present limitations of thought, senses, action, and understanding. To paraphrase Nick:

It might be seen as a shame that we develop so much from age 5 to 30, and then change so little after that. Perhaps there is a much fuller development we could enjoy that we’ve previously not been able to experience.

2. Permits existing humans a choice of their evolution/life.

Nick believes that a Worthy Successor would allow people to have the ability to choose if they want to change or not. He sees a world where some humans prefer to remain in the monkey suit, while others augment themselves, and still others merge with greater intelligence or exist as blissfully expansive mind-uploaded consciousness.

3. It pursues its own grand objectives.

Within certain moral constraints, it would pursue its own grand, expansive aims, beyond human goals and imagination.

Just as humans have remarkably more advanced and complex goals to pursue that are beyond the imagination of gerbils, a Worthy Successor might also have goals and objectives wildly beyond those which we humans can imagine. As it would be ridiculous for humans to have their goals limited by the imagination of gerbils, so the goals of an AGI probably should not be hindered by the imagination of hominids.

That said, Nick does stipulate that some moral values should be upheld well beyond the human form – and Nick suspects that there may be a kind of moral bedrock that greater intelligences could tap into that would ensure that things like torture or slavery aren’t ever seen as acceptable.

Nick’s Regulation / Innovation Considerations

1. We should slow down as it becomes more clear that we’re approaching AGI.

A purely arms-race dynamic among international AGI labs would be dangerous, and some coordination seems best, though Nick doesn’t have a strong preference for which international body handles said coordination.

Nick thinks that a temporary pause of 6-12 months might be right – but he says that this risks a permanent ban on AI, which he would consider a terrible outcome as this would prevent the blooming of greater minds, and would potentially prevent the uplifting of human minds to new heights.

2. We should move towards acceptance of the interests of digital minds.

If machines are conscious, they should have moral consideration legally and culturally. Of course, today we don’t have a very good way to detect sentience, but if it can be shown that machines have an inner experience as we do, Nick thinks that they deserve moral consideration.

3. We should avoid pursuing any one radical governance vision.

Ham-fistedly imposing a single governance approach is likely to miss nuance and do more harm than good. Nick was clear in our interview that some kind of light pause directly before superintelligence is reached (if such a “goldilocks zone” could ever be determined) might be ideal – but that having one human ideology lead the development of AGI is probably a bad idea.

Concluding Notes

I’m grateful to have had Bostrom as the first guest in this series – it was a pleasure to connect with him after a full decade since our last interview. I was frankly surprised by his anthropomorphic takes on value – and his belief that humanity might reasonably expect to survive vastly posthuman intelligences (or have our wellbeing and survival prioritized by them), which I’ve long considered wildly unlikely.

My intuition tells me that Nick knows full well that the wild fecundity of values and actions and powers and priorities created by a real superintelligence would almost certainly spell either the fast or gradual attenuation of humanity. He accepts this, I suspect. If you’ve read his first book, or any of his papers and essays (I recommend What is a Singleton and Sharing the World with Digital Minds), you know he’s more than aware of the potential uncontrollability of AGI, and the potentially more vast moral claims that such superminds would have over the comparatively meager needs and “values” of hominids.

I suspect that his purpose with this latest book (Deep Utopia) isn’t actually a change of heart around the likelihood of humans surviving (or mattering very much) after AGI’s emergence. Rather, I suspect that his latest book is intended mostly to prevent a Butlerian Jihad-style ban on AGI, a ban that would prevent AGI from emerging at all.

That said, I won’t put words in Nick’s mouth – these are merely my own speculations. Nick long ago expressed visions of humans using AGI as a carefully constructed extrapolation of human values (an idea that I don’t agree with), and it’s possible that he really does think that human flourishing after AGI is achievable.

What do you think? Drop your comments on the YouTube video and let me know.

In either case, this conversation was a blast – and I hope you enjoyed it.

Follow the Trajectory