In our first-ever episode of The Trajectory podcast, Yoshua Bengio shared his ideas in our “AGI Destinations” series (playlist here).
At the very end of that interview, I asked him to respond to two groups of people.
His response was probably the most reasonable, non-dogmatic response I've had on the show, even as of this writing some two years later.
I refer to this level-headed response as “The Bengio Take” on the trajectory of intelligence and life:
Dan Faggella, question 1: (Paraphrased) “There are some people who advocate that we need to blast off to AGI as soon as we can, to help humans and to explore the reaches of intelligence. What would you say to those people?”
Bengio response 1:
Compassion. There are people suffering right now. There are harms that are happening right now that are caused by AI. There are like eight billion beautiful human beings out there. They exist now.
I think that there’s a human propensity to care for each other that at least I would find difficult to ignore. And so although I’m not against the idea of exploring something better than humans, we have to do it in a way that is considerate to all the beauty that currently exists and all the pain that currently exists and that we have to take care of.
Dan Faggella, question 2: (Paraphrased) “There are some people who advocate that no intelligence should ever go beyond human level, and that doing so would be wrong. What would you say to that crowd?”
Bengio response 2:
Well, I think we have to have an open mind. It's just like, well, philosophy teaches us and science teaches us that, first of all, we need to open our minds and our hearts to other living beings, intelligent or less intelligent, that exist right now.
Second, we might have some more respect for the possibility that other intelligent beings could arise. I think we do need to protect humanity, and we do need to try to remain safe, but we also need to consider the possibilities that exist. And it's okay if it takes time; we need to take the time that we need for understanding and making the right decisions.
But humans are not the end-all. We are part of a bigger story that is unfolding, and even currently there is, I think, lots of beauty in other species. We don't know what it is like to feel like a bat, for example, or a dolphin, and so I think we should have respect for that.
This is, as far as I can tell, exactly how we should be thinking about the grand trajectory of intelligence.
There may be worthy successors, but we shouldn’t rush to the first thing that seems “super smart” and assume that that thing is sentient, or that it would bloom complexity and value and beauty into the world as nature has done (Toby Ord and others who have been on the podcast echo the same sentiment).
The stakes are too high to say "all successors are worthy." We need to make sure we protect the great flame of life itself (which Bengio reveres, and of which he rightfully sees humanity as simply a part) from a superintelligence that might extinguish it and never replace the sentient splendor and richness of potentia that biological life took so long to build. We also need to study what worthiness is before we presume AGI has it. Bengio asks us to be sure that the flame isn't extinguished, so that we might allow it to blaze beautifully into forms and beings unknown (as man was unknown to the nematode or sea snail).
If we followed Bengio's level-headed thinking here, we would both care for the sentient life that exists now and keep an open, patient mind about the intelligences that might one day succeed us.
I hope we do a lot of both, frankly.
…
Header image credit: nouvelles.umontreal.ca/