Worthy Successor: A posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government and determine the future path of life itself.

It's a subjective term, varying widely in its definition depending on who you ask. But getting someone to define it tells you a lot about their ideal outcomes, their highest values, and the policies they would likely recommend (or not recommend) for AGI governance. In the rest of the short article below, I'll draw on ideas from past essays to explore why building such an entity is crucial, and how we might know when we have a truly worthy successor. I'll end with an FAQ based on conversations I've had on Twitter.
“Okay, this thing is clearly more capable than us, and will clearly discover more, get more done, and survive better in the cold expansive universe than we humans can.”
After the statement above, it may make sense (depending on where you stand on the ITPM) to say something like:

"You know what? I think the AGI's got it. I think the reins of the future – the handle of the very torch of life – should be held by this god-like thing, rather than by man."
So… what's on your Worthy Successor List?

Some people might say: "Nothing! No thing should ever surpass humanity… we are the eternal pillar of moral value! We should eternally determine the future of life ourselves!" People who say such things are, as far as I can tell, clearly morally wrong on many levels. They are members of the Council of Apes. But you, dear reader, surely you have a list of requirements which – if met – would permit you to let go of the reins and hand them over to a worthy successor? As a species, I think it makes sense to look frankly at our fleeting position, and decide when and how to pass the baton upwards.

A "Worthy Successor list" is a list of capabilities an AGI could have that would convince you that the AGI (not humanity) should hold the reins of the future.
What’s on your Worthy Successor list? Here’s a handful of mine: pic.twitter.com/HOvaIvNRUx — Daniel Faggella (@danfaggella) December 22, 2023
1. "Just because it's smarter than us, is it 'worthier' than we are? What about love, humor, creativity… what about all the things we are that a machine could never be?!"

Absolutely not. "Smarter" is somewhat vague, and "smart" doesn't imply more potentia. Potentia (see link to article above) implies a vast array of traits, qualities, and capabilities. It implies not just more intelligence, but more of all the requisite abilities that give a living thing the capacity to survive in an uncertain world: physical powers like speed and strength (presumably an AGI could control billions of robots, spaceships, etc. – and could devise entirely new modes of transportation, power, communication, etc.), and cognitive powers like memory, creativity, etc. Humans like to argue that they have an ineffable essence that no machine could replicate, but (a) it may in fact be quite replicable, and (b) there are qualities and traits vastly outside the reaches of humanity which are much more valuable, rich, and (importantly) conducive to continued survival in the state of nature of the universe as we know it.
2. "So you think AGI should just kill all humans? Is that now a GOOD thing?"

Obviously I'm not wishing for human torment and destruction. Across a thousand articles and social posts I've never expressed that sentiment. For years I've been clear about the highest goals we can hope for (as outlined in Hope) –
3. "What about brain-computer interface / nanotech / other technologies?"

Years ago I thought that BMI would be important, and nanotech would be important. I read Bostrom and Kurzweil and others – and foresaw a kind of confluence of transhuman technologies all working together to increase intelligence and potentia. Now, I think there is a good chance that AGI by itself – without much wetware or biology innovation – may be what gets us there. Progress in AI has been astronomically faster than progress in neurotech. I interviewed BrainGate researchers a decade ago, and AI researchers a decade ago; only the latter have made gigantic leaps forward. It's possible that some BMI work will be advanced by breakthroughs in AI – and that this will help us close the gap on the nature of intelligence. I suspect some degree of that is likely, but I think the vast bulk of the legwork of posthuman blooming will be done outside of biological substrates.