Assuming AGI is achievable (and many, many of its former detractors believe it is) – what should be its purpose?
I argue that the great (and ultimately, only) moral aim of AGI should be the creation of a Worthy Successor – an entity with more capability, intelligence, ability to survive, and (consequently) moral value than all of humanity.
We might define the term this way:
Worthy Successor: A posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government, and determine the future path of life itself.
It’s a subjective term, varying widely in its definition depending on who you ask. But getting someone to define this term tells you a lot about their ideal outcomes, their highest values, and the policies they would likely recommend (or oppose) for AGI governance.
In the rest of the short article below, I’ll draw on ideas from past essays in order to explore why building such an entity is crucial, and how we might know when we have a truly worthy successor. I’ll end with an FAQ based on conversations I’ve had on Twitter.
An AI capable of being a successor to humanity would have to – at minimum – be more generally capable and powerful than humanity. But an entity with great power and completely arbitrary goals could end sentient life (a la Bostrom’s Paperclip Maximizer) and prevent the blossoming of more complexity and life.
An entity with posthuman powers who also treats humanity well (i.e. a Great Babysitter) is a better outcome from an anthropocentric perspective, but it’s still a fettered objective for the long term.
An ideal successor would not only treat humanity well (though it’s tremendously unlikely that such benevolent treatment from AI could be guaranteed for long), but would – more importantly – continue to bloom life and potentia into the universe in more varied and capable forms.
We might imagine the range of worthy and unworthy successors this way:
Here are the top two reasons for creating a worthy successor – as listed in the essay Potentia:
Unless you claim your highest value to be “homo sapiens as they are,” essentially any set of moral values would dictate that – if it were possible – a worthy successor should be created. Here’s the argument from Good Monster:
Basically, if you want to maximize conscious happiness, or ensure the most flourishing earth ecosystem of life, or discover the secrets of nature and physics… or whatever else your loftiest and greatest moral aim might be – there is a hypothetical AGI that could do that job better than humanity.
I dislike the “good monster” argument compared to the “potentia” argument – but both suffice for our purposes here.
A “Worthy Successor List” is a list of capabilities that an AGI could have that would convince you that the AGI (not humanity) should hold the reins of the future.
Here’s a handful of the items on my list:
After a certain number of these miracle events, it might make sense to say:
“Okay, this thing is clearly more capable than us, and will clearly discover more, get more done, and survive better in the cold expansive universe than we humans can.”
After the statement above, it may make sense (depending on where you stand on the Intelligence Trajectory Political Matrix, or ITPM) to say something like:
“You know what? I think the AGI’s got it. I think the reins of the future – the very torch of life itself – should be carried by this god-like thing, rather than by man.”
So… what’s on your Worthy Successor List?
Here’s a handful of mine: pic.twitter.com/HOvaIvNRUx
— Daniel Faggella (@danfaggella) December 22, 2023
Some people might say: “Nothing! No thing should ever surpass humanity… we are the eternal pillar of moral value! We should eternally determine the future of life ourselves!”
People who say such things are, as far as I can tell, clearly morally wrong on many levels. They are members of the Council of Apes.
But you, dear reader – surely you have a list of requirements which, if met, would permit you to let go of the reins and hand them over to a worthy successor?
As a species, I think it makes sense to look frankly at our fleeting position, and decide when and how to pass the baton upwards.
An unworthy successor would be an AI that not only disregards the lives of humans (probably implying our deaths), but also snuffs out consciousness or life itself – and maybe even does so with great suffering. This is literally the worst thing we could imagine.
A Worthy Successor, by contrast, would be the best thing we can imagine. But different people imagine different things – and this is why hashing out this challenging posthuman discourse is important. These ideas need to get on the table.
If humanity must have a successor (and it must – we can’t expect nature to stand still), then we should create a Worthy Successor. We ought not conjure such a being recklessly. We should define the traits of such an entity, and how those traits could be tested for – and then we should innovate and coordinate as best we can in that direction.
A frenzied AI arms race (between labs or countries) is unlikely to get us there. I’ve written at greater length on what global coordination around AGI destinations might look like (essays: Unite or Fight for Global AGI Governance, United Nations SDGs of Strong AI), but I suspect the best coordination ideas are yet to bubble up, and will almost certainly come from smarter people than myself.
1. “Just because it’s smarter than us, is it ‘worthier’ than we are? What about love, humor, creativity… what about all the things we are that a machine could never be?!”
Absolutely not.
“Smarter” is somewhat vague, and “smart” doesn’t imply more potentia. Potentia (see the article linked above) implies a vast array of traits, qualities, and capabilities. It implies not just more intelligence, but more of all the requisite abilities that give a living thing the capacity to survive in an uncertain world.
Physical powers like speed and strength (presumably an AGI could control billions of robots, spaceships, etc. – and could devise entirely new modes of transportation, power, communication, etc.).
Cognitive powers like memory, creativity, etc.
Humans like to argue that they have an ineffable essence that no machine could replicate, but (a) that essence may in fact be quite replicable, and (b) there are qualities and traits vastly outside the reach of humanity which are much more valuable, rich, and (importantly) conducive to continued survival in the state of nature of the universe as we know it.
2. “So you think AGI should just kill all humans? Is that now a GOOD thing?”
Obviously I’m not wishing for human torment and destruction. Across a thousand articles and social posts I’ve never expressed that sentiment.
For years I’ve been clear about the highest goals we can hope for (as outlined in Hope) –
My hope is that individual instantiations of hominid (and maybe other species) sentience might be given the loveliest exit conceivable.
All that said, if one had to choose between (a) hominids being happy, or continuing to persist, and (b) life and potentia itself blooming into the galaxy to discover the good itself and keep the torch of life alive… one should really choose (b). My hope is that (a) is also viable.
3. “What about brain-computer interface / nanotech / other technologies?”
Years ago I thought that BMI would be important, and nanotech would be important. I read Bostrom and Kurzweil and others – and foresaw a kind of confluence of transhuman technologies all working together to increase intelligence and potentia.
Now, I think there is a good shot that AGI by itself – without much wetware or biology innovation – may be what gets us there. Progress in AI has been astronomically faster than progress in neurotech. I interviewed BrainGate researchers a decade ago, and AI researchers a decade ago. Only the latter group has made gigantic leaps forward.
It’s possible that some BMI work will be furthered by breakthroughs in AI – and that this will help us close the gap on the nature of intelligence. I suspect some degree of that is likely, but I think the vast bulk of the legwork of posthuman blooming will be done outside of biological substrates.