Green Eggs and Ham – Facing Future Technology and AI Like an Adult
If you were a child at any point in the last 50 years, you’re probably familiar with the story. Sam I Am tries to get a protagonist (let’s call him…
At some point, AGI or non-biological intelligence will be accomplishing most of what there is to be done in the universe, and humanity won’t be contributing meaningfully to much of anything (scientific discovery, the exploration of consciousness, etc.).
In nature, things seem to grow and contribute, or they die. In nature, many deaths are unpleasant (being eaten alive while kicking and screaming is pretty common).
But as humans, we might squarely look at our impending obsolescence, work hard to be useful while we can – and also do our best to design a way to have a happy or desirable future, even if not a particularly useful one.
At the dawn of AGI, we should be asking:
What would be the best Ultimate Retirement for obsolete humans?
I’d argue that the best-case scenario here is not living as we do now (bouncing between happiness and sadness, myriad pains, physical diseases, heartbreak, etc.), but having our minds uploaded into a kind of personalized, hyper-expansive and hyper-blissful virtual world, full of rich sentient experience and exploration a million times beyond what the human mind can conceive.
Such an uploaded reality might feel like 3 billion earth-years (as subjective experience might be nearly disconnected from clock time), but would in fact take only a few minutes of compute time.
So, each human being (or, more accurately, each currently extant instance of human consciousness) could hypothetically get such a blissful expansive experience for a few minutes – and then fade away. An Ultimate Retirement vastly preferable to a normal 85 years of (comparatively) mediocre pain and pleasure mixed together in a usual human life.
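For readers who like to see the arithmetic, here is a back-of-envelope sketch of the speedup such a scenario implies. The specific numbers (a 5-minute compute window, a 365.25-day year) are my own illustrative assumptions, not claims from the argument above.

```python
# Back-of-envelope sketch: the subjective-to-wallclock speedup implied by
# "3 billion earth-years of experience in a few minutes of compute time."
# The 5-minute wall-clock window is an illustrative assumption.

SUBJECTIVE_YEARS = 3e9               # "3 billion earth-years" of experience
WALLCLOCK_MINUTES = 5                # assumed "a few minutes" of compute time

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes in a year

subjective_minutes = SUBJECTIVE_YEARS * MINUTES_PER_YEAR
speedup = subjective_minutes / WALLCLOCK_MINUTES

print(f"Implied speedup: ~{speedup:.1e}x faster than clock time")
# -> Implied speedup: ~3.2e+14x faster than clock time
```

In other words, under these assumed numbers, the scenario amounts to simulated experience running roughly 3×10^14 times faster than clock time.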
I’ve explored this idea further in Sugar Cubes, The Epitome of Freedom, and As Much as We Can Hope For.
If you have a different idea about what Ultimate Retirement options humans might have when they have nothing to contribute in any meaningful way – feel free to suggest your ideas. At present, this blissful upload scenario is the best one I have.
Any Ultimate Retirement implies a kind of charity from beings who are much more intelligent, powerful, and useful than we.
There are many reasons to believe that AGI would not deign to grant us the charity of “keeping earth for humans” or even the more modest request of “giving us all some time in a blissful upload before we dissolve.”
Even more uncouth (but necessary to consider if we wish to honestly look at the future), there are many valid reasons to suspect that it would in fact be wrong for AGI to prioritize arbitrary human benefit when there may be vastly more useful or more “good” aims to pursue.
Once AGI is astronomically more powerful than we are – it seems likely that we’ll be negotiating from a position of not only weakness, but ignorance (just as if a chimpanzee were trying to negotiate with a human).
I have argued before that building an AGI that can keep the flame of life alive is more important than preserving our hominid torch.
I still believe this is true, and that from a moral standpoint, building a Worthy Successor is, long-term, a vastly more important job than coddling or preserving relatively obsolete humans forever. But we might as well shoot for both.
It is worth defining what we’d like in terms of an Ultimate Retirement, and seeing if, as AGI is being built, we can give ourselves the best odds of the most desirable (even if no longer useful) life.
If there is a shot at achieving such an outcome – we’d certainly have to conceive of and move towards it as early in the process as we can.
(Note: There are arguments that humans would never be in a position to retire, because we will remain the key agents of volition and change in the world – either because (a) AGI will always be inert and unable to conceive of goals like humans can, and/or because (b) we would augment our minds enough to keep up with AGI and remain relevant in an overall “ecosystem” of minds. I suspect neither of these scenarios is particularly likely – and that accepting some point of irrelevance and attenuation for hominids is almost certainly best.)