Green Eggs and Ham – Facing Future Technology and AI Like an Adult
If you were a child at any point in the last 50 years, you’re probably familiar with the story. Sam I Am tries to get a protagonist (let’s call him…
Ask a group of 7-year-olds “What do you want to be when you grow up?” and you’ll get a variety of answers – some realistic, some impossible, and some outright detrimental.
Given the rapid and drastic changes humanity is currently undergoing and is about to undergo – from the increasing digitization of our experience to the advancement of artificial intelligence – it behooves us to think seriously about where we want to be as a species in a handful of decades.
Many leading politicians, AI researchers, and AGI lab leaders are amenable to the idea of some degree of global governance around AGI.
But what destination do we want to arrive at in 20 or 200 years?
Ask adult humans that question today and you’ll get just as many impossible and detrimental answers as you would from 7-year-olds.
Given how powerful these technologies are, we should be able to look frankly at the impending changes that AGI and emerging technologies imply – and pick among the futures that are viable – instead of burying our heads in the sand and wishing for comforting but impossible (or outright detrimental) futures.
Here are two of the most commonly expressed impossible or detrimental outcomes for humanity:
1. Eternal Hominid Kingdom. Humans, or slightly upgraded transhumans, rule a handful of planets and space colonies, possibly maintaining some kind of global alliance.
2. Gradual Biological Evolution from Homo Sapiens. Humans continue evolving biologically without any technological integrations. Biology continues to do what biology does. For millions of years tech remains merely a tool.
I’d argue that both of the above scenarios are unrealistic and ridiculous, for one overriding reason: these developments will not simply stop.
There may be strong arguments as to why humanity should hold off on building AGI in the immediate future, and there are probably many reasons why we should hold off on reckless brain-computer interface adoption without more careful testing. But “hold off on these developments forever” isn’t on the table.
There is too much benefit to be had, too many balls already rolling forward, and too many reasons why human nature will leverage these technologies.
The futures that are viable imply changes that are unfortunately uncomfortable for most people to grapple with. Any sufficiently long-term positive future eventually implies the attenuation of humanity. Maybe it takes 1000 years. Maybe it takes 100. Maybe 10.
But the “eternal hominid kingdom” future is outlandishly unlikely, and probably not morally best.
In the terminology of the 5 Stages of Grief, we must move from Denial to Acceptance. The majority of unrealistic end-games spring from Denial and Bargaining.
In the long run, there are only four futures we can reasonably expect to move towards – and only the last could be argued to be beneficial to life.
Two options to attenuate:
1. Extinction from non-AGI-related causes
2. Extinction from AGI or AGI-related causes
Two options to transform:
3. Unworthy successor
4. Worthy successor
I suspect that only the Worthy Successor outcome is a beneficial outcome – and that it is a moral imperative for us to eventually reach such a successor (here’s the full argument as to why I believe this).
Unfortunately, a Worthy Successor is not guaranteed to respect or take care of humanity – and there are many smart thinkers who suspect that it’s very unlikely for us to control AGI either now or in the future.
We can’t “freeze time” and stop the forces of change (climate change, technological development, demographic collapse).
Year by year (or, at the very slowest, generation by generation), our experience will become more and more virtual, and brain-augmented humans and strong AI will become more and more powerful.
We also can’t ensure that artificial general intelligence will maintain human survival or wellbeing – or even consider us at all in its plans and aims.
I therefore posit the following concluding recommendations for policy discussions regarding the future of AGI and brain-computer interfaces:
1. Accept Inevitable Posthumanism: We should accept that an entity we create with capabilities and goals beyond humanity’s cannot be guaranteed to value humanity – but we should also accept that an eventual post-human trajectory is inevitable.
Policy discussions that assume an eternal hominid kingdom (a world where AI is merely a tool, forever) are fantasy; they ignore the inevitable forces that will change human nature, and the posthuman intelligences that are already bubbling up through humanity and coming into being now.
This doesn’t mean we should rush to posthumanism, or that there shouldn’t be guidelines put in place. There should be. But crafting such guidelines will require point 2 below.
2. Coordinate Internationally Around AGI Trajectories: The current state of affairs (as of the time of this writing, May 2024) is one of no international AGI governance at all.
Because so many massive waves of technological change are plowing forward at the same time, there is an outright AGI arms race between the major AGI labs (Meta, OpenAI, etc.) and the great powers (the USA and China). This dangerous condition of “if I don’t build it, they will – so I must build it, even though I know it’s dangerous” exists because there are no international rules for which futures humanity should or should not move towards regarding posthuman intelligence (I argue that such coordination is the only alternative to an arms race – a dynamic sketched in the toy example below).
So if you want AGI progress slowed – either because you want to chase the “eternal hominid kingdom” dream, or simply because you don’t want a military AI arms race ending in armageddon in the coming decade – you should be advocating for serious global governance now.
What such coordination might look like is something I lay out briefly in my longer Emerj article, The SDGs of Strong AI.
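The “if I don’t build it, they will” dynamic above is essentially a prisoner’s dilemma. Here is a toy sketch of that logic – the payoff numbers are my own illustrative assumptions, not anything from this article:

```python
# A toy payoff matrix for the AGI arms-race dynamic described above.
# The numbers are illustrative assumptions; higher payoffs are better.
payoffs = {
    ("hold off", "hold off"): (3, 3),  # coordinated restraint under global governance
    ("hold off", "build"):    (0, 4),  # rival gains a decisive advantage
    ("build",    "hold off"): (4, 0),
    ("build",    "build"):    (1, 1),  # reckless race: both worse off than restraint
}

# Without enforceable coordination, "build" is the dominant strategy:
for rival in ("hold off", "build"):
    best = max(("hold off", "build"), key=lambda me: payoffs[(me, rival)][0])
    print(f"If the rival chooses '{rival}', the best response is '{best}'")

# Both lines print 'build', so the race settles at (1, 1) - even though
# (3, 3) under binding international rules would be better for everyone.
```

Whatever the exact numbers, so long as building while the rival holds off is the best single outcome and being the only one to hold off is the worst, the race dynamic holds – which is why binding coordination, not unilateral restraint, is the relevant lever.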
3. Explore Options for Preserving / Extending Human Consciousnesses: Because posthuman intelligences seem all but destined to rule the future, and because we cannot ensure that such intelligences will treat humanity well, we should consider ways to extend the survival and ensure the wellbeing of current human consciousnesses.
It may be viable for humanity to “merge” with technology via brain-computer interface as part of the path to a worthy successor.
It may also be possible to upload human minds (Kurzweil’s Ship of Theseus approach is a useful thought experiment here) into non-biological substrates to simulate a trillion years of expansive blissful experience in what might only be an actual earth-hour (see Sugar Cubes).
This way, even if AGI eventually decides to do something more useful with the compute resources housing those human consciousnesses, a single hour would suffice to give each consciousness more blissful experience than it could ever have in a billion lifetimes (not a bad way to bow out). This kind of uploading might also help to train the AGI, developing it into a more capable and worthy successor.
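For a sense of scale, here is a minimal back-of-the-envelope check of that arithmetic – a sketch assuming an 80-year human lifetime and ~8,766 hours per year, figures that are mine and not the article’s:

```python
# Sanity-check the subjective-time arithmetic above.
HOURS_PER_YEAR = 24 * 365.25   # ~8,766 hours in a year (assumed figure)
LIFETIME_YEARS = 80            # assumed human lifetime

subjective_years = 1e12        # "a trillion years" of simulated experience
wall_clock_hours = 1           # "an actual earth-hour"

# Required simulation speedup: subjective hours per wall-clock hour.
speedup = (subjective_years * HOURS_PER_YEAR) / wall_clock_hours
print(f"speedup needed: ~{speedup:.2e}x")        # ~8.77e+15x

# How many 80-year lifetimes fit into a trillion subjective years?
lifetimes = subjective_years / LIFETIME_YEARS
print(f"equivalent lifetimes: ~{lifetimes:.2e}") # ~1.25e+10 (about 12.5 billion)
```

So the claim checks out on its own terms: one earth-hour at that (admittedly staggering) speedup would contain roughly twelve billion 80-year lifetimes of experience.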
In order to think critically about the long-term direction we want humanity to take – and consider the best and most viable paths for innovation and regulation – we must begin by letting go of childish or impossible ideas.
We could see this time as tragic, as it will almost certainly be the J-curve of technology that ends the reign of unaugmented Homo sapiens as the most powerful species on the planet.
There are great risks before us – but also greater opportunities than life has ever before known. Bargaining and Depression don’t get us far, but Acceptance does.
The earth now burns much brighter with the torch of life than it did billions of years ago. There are more species, and more creatures with larger brains and greater sentient ranges, capable of more ideas, goals, achievements, and mastery of nature.
The goal is to walk this path well enough to ensure that 10B years from now, the universe itself is still lit – lit more brightly still – with the torch of consciousness, life, potentia.
That intelligence beyond us can discover what this universe is all about – and can continue to live and create and discover in ways that are as unimaginable to us as our activities are to earthworms.
The baton has been passed up to us.
And while we cannot grip the baton forever – we can pass it up again.
This time – maybe – deliberately.