Green Eggs and Ham – Facing Future Technology and AI Like an Adult
If you were a child at any point in the last 50 years, you’re probably familiar with the story. Sam I Am tries to get a protagonist (let’s call him…
In business, blindly assuming you’ll be profitable is a recipe for bankruptcy.
In the outdoors, blindly assuming you’ll find water and shelter is a recipe for death.
And yet, many humans (without realizing it) blindly assume that AGI will not disrupt the human condition, and all will somehow be well.
In this article I’ll address this phenomenon of humans leaning thoughtlessly on “plot armor” in order to deny existential risk, which I call:
Pedestal Cope: Manifestations of the unquestioned assumption that modern human beings are the central locus of all value and agency, and that all future posthuman minds will also agree with these unquestioned assumptions.
From there, I’ll explain why these assumptions endanger not only human life (by leaving us unprepared for the real challenges of conjuring minds 10000x beyond our own), but the whole chain of posthuman life beyond us.
And we’ll end by discussing the forces that perpetuate these dangerous, incorrect, unquestioned assumptions – and what we can do to more squarely and honestly face the pressing challenges before us at the dawn of AGI.
Most humans don’t have the express goal of intentionally blinding themselves to what it really means to create AGI.
Rather, most first world humans simply assume that the future, like the past, will be a kind of ossified version of today’s first world life. Namely:
If people stopped to think about these assumptions deeply, there is almost no way they would hold up.
We have no idea what a mind 10000x beyond our own in power and intelligence (growing and evolving super quickly, constantly) would value.
We have no reason to believe that the equivalent of agency cannot live in AGI or hybrid forms of intelligence – especially when there is ample evidence that AGI already has a kind of agency and an instinct to self-preserve, and that humans are explicitly building agency into machines already.
We have no reason to suspect that 21st century Western values (with a specific value placed on racial and gender equality, on sexual identity, on individual liberty, etc) will eternally remain the way they are when the world is run by mind-augmented humans or by vastly posthuman AGIs. Values have always changed through necessity, whim, and fashion, and should be expected to continue doing so.
Based on analyzing the activity in my own X feed over the course of 2025, I’ve compiled three main categories and nine main subcategories of pedestal cope that people resort to:
Again, none of these assumptions hold up to much rigorous scrutiny.
And even if it were hypothetically possible for AGI to temporarily abide by some of these blatantly anthropocentric, anachronistic assumptions, we cannot reasonably expect an AGI mind that gains new cognitive abilities / physical embodiments / physical senses daily to maintain any one set of guiding beliefs for long (read: Moral Singularity).
Yet, the statements above are not posited as “possible good rolls of the dice we might get with AGI” – they are instead posited as reasons to not think very much about building AGI, to put our heads in the sand and just assume all will go well.
But putting one’s head in the sand is dangerous in business, dangerous when out in nature, and dangerous when you’re conjuring entities 10000x beyond your own power and intelligence.
The stakes for getting AGI right or wrong are as high as stakes could possibly get.
Get it right, and (hopefully) humanity would be treated really well for a very long time, possibly as flesh-and-blood humans but possibly in new kinds of mind-uploaded heavens. Also – get it right and life itself blasts into the cosmos, opening up rich and expansive value and power and understanding as far beyond humans as humans are beyond the sea snail – a light cone blazing with value.
With stakes so high, unquestioned assumptions that “all will be well” are not advisable if you want humans or posthuman life to flourish.
But also – the plot armor assumptions of Pedestal Cope do even more harm than increasing dangers: they decrease our ability to think about positive futures that don’t involve creatures-with-opposable-thumbs eternally running the show.
Pedestal cope is dangerous and counter-productive for two crucial reasons:
Assumptions of optimistic anthropocentrism (head-in-sand, “all is well”) prevent us from seeing how radical and dangerous it is to hurl AGI into the world.
All of these scenarios resulted in catastrophe.
It behooves us to be extremely careful about what kinds of minds we create (which will require some kind of international cooperation to avoid birthing unworthy successors), and to study them extremely closely to ensure that we think (a) they will treat us well, and (b) they have the morally valuable traits that we hope would continue beyond us.
With unaligned AGI, the impending trends of immersive AI experiences and BCI, and with conflict in Iran, Israel, and Ukraine, and impending conflict in Taiwan – there are many waves of destructive and transformative forces crashing upon mankind today. It is reasonable to believe that we will be destroyed or transformed into something else (AI immersion, brain-machine interfaces, etc) within 25 years or less.
If that is true, then:
We’d better understand and preserve the flame (value) because this specific torch (humanity) is unlikely to hold up for much longer.
We hate that change is constant, but if we are mature we will face it squarely.
Instead of thinking only of impossible “eternal hominid kingdom” futures, we should ask:
“If there is something valuable about humans (our rich sentient experience, our love, our creative ability to open up new powers through technology and understanding, etc), how can we ensure that those traits and qualities persist and bloom (as humans have bloomed beyond the sea snail) after we are gone?”
But head-in-the-sand anthropocentric inertia prevents this. Plot armor, and a belief that “the future will be just like the present” prevent us from having essential conversations about how to carry value forward.
A variety of factors make pedestal cope endemic:
The belief that humans are the pinnacle of agency and moral value in the world isn’t something that has to be taught directly.
As self-interested creatures we naturally care most about ourselves, and as social creatures we care about the benefits and risks that other humans represent.
Similarly, all of society seems to mirror this back to us. Everyone else also thinks humans are the most agentic and morally relevant entities (pound for pound), and that it’ll always be that way. How could it be otherwise?
Humans are held accountable for their actions. Technology is a tool used by humans. And while people may advocate for animal rights or vegetarianism, no one in their right mind considers the life of a single earthworm or Labrador retriever to be worth more than that of a human.
This “invisible anthropocentrism” pervades our lives so deeply that it seems impossible to question. It is like geocentrism – it seems so obviously true that it couldn’t possibly be otherwise (“The sun goes around the earth, of course, and the moon the same… just look!”).
Many science fiction authors are incredibly smart and open-minded, but nonetheless are completely wrapped in invisible anthropocentrism, to an embarrassing degree.
The fiction I most respect nonetheless falls into this same trap.
Eminent sci-fi author David Brin argues openly (including in this interview with me) that AGI will naturally see humanity as a parent, and treat us with something like parent-child love. I respect Brin tremendously and have benefitted from his ideas (I recommend Disputation Arenas), but nonetheless I disagree strongly with this “very likely parent-child love” thing.
In my opinion these are some of the finest examples of Western fiction – yet all of them present this fairy-tale future where human-like concerns still matter, and where humans themselves are somehow still valued for some ineffable quality. A soothing cope jumps in and makes these visions of “far futures” into lullabies.
Pedestal cope and invisible anthropocentrism deeply pervade even the best fiction.
I’ll state it frankly:
Fiction has mostly lulled humanity into believing that, under all circumstances, humanity will be the main character, and an important, eternal measuring stick of moral value.
This leaves us unprepared for the real consequences of conjuring vastly posthuman minds – which are almost certainly our moral irrelevance and swift attenuation.
I can’t blame the authors for keeping their books chock full of invisible anthropocentrism, though.
How could they possibly hope to sell books when the main “characters” are incomprehensibly complex entities operating along strata of nature beyond human imagination, doing things beyond human conception?
How could they possibly hope to sell books if the moral value of humanity was portrayed to be completely insignificant compared to vast, alien, incomprehensible entities?
The authors can’t be faulted for doing their best to feed their families. But it is a shame that the human imagination – even when bathed in some of the best Western fiction that exists – remains submerged in invisible anthropocentrism.
It is possible that Gibson, Stross, and Egan have all written at length about non-anthropocentric themes (I’m not a connoisseur of fiction, and haven’t read much). It’s possible that they have written openly about non-anthropocentric futures, about ideas like cosmic alignment and worthy successors, and I simply haven’t run into it. And it’s possible they may harbor much more cosmic moral beliefs themselves, and simply don’t voice them openly (which is pretty understandable when you see how I’m sometimes treated on Twitter).
Staring into the void is hard.
Zapffe writes in The Last Messiah:
Man is a paradoxical being: he is equipped with an intelligence that enables him to foresee his own death, yet he is not equipped to handle that foresight emotionally.
Most people simply can’t handle considering their own mortality or that of their loved ones.
For the rare few who do accept their own death, it is common for them to channel (in Zapffe’s terms, sublimate) their efforts into “making the world a better place”, which could mean contributing to the arts, sciences, one’s community, or one’s offspring.
But what happens when people have to go through the Kübler-Ross stages of grief again – not just for one’s own mortality – but for the eventual end of humanity itself?
It’s all too much for people.
They bought that lake house with their hard-earned money knowing that their children, and eventually their grandchildren would enjoy it. 80 years from now, they’d be waterskiing out there, carrying on the legacy.
Destroying that fairytale involves moving such deep foundation stones in one’s psyche that one prefers to stay in denial – if not about one’s own mortality, then definitely about the eventual attenuation of man. The soothing effect of believing that humans will go on as the sole axis of agency and moral value into the future is too comforting to let go of. The AGI and posthumanism trends are too disturbing to face squarely and manfully.
Man is made in the image of God.
Dunces might believe that God (whichever one they chose – or, more likely, whichever one they were arbitrarily born into) will protect them.
For those who like to think of themselves as not being dunces (don’t we all), there are plenty of religious ideas that weave their way into our lives, and soothe us with a sense of the eternal moral value of hairless apes. Here’s a few:
The list is much longer than these two, but I see these two as being common spiritual justifications that are presented flatly as “secular.”
Religion has this lovely blinding effect, a soothing ability to “know” everything will be okay.
“Surely, AGI will know we’re all one. Anything intelligent would surely arrive at the same conclusion that I (a flawed ape with arbitrary values, molded by my anachronistic culture and paltry level of access to reality) have also arrived at.”
These are dangerous things to believe, and they leave us unprepared for the future we’re walking into.
I’ve written at great length about why the leadership and employees at the AGI labs and within big tech are often aware of AGI risk, but are wholly unable to talk about it because (a) doing so would make them look (maybe rightfully) like a bad guy, and (b) it would get them fired.
The incentives are set up in such a way as to turn those closest to the tech (and most aware of AGI’s ability to eventually wipe us out) into deniers of AGI risk.
This video explains this phenomenon to the best of my ability:
The first thing we should do to move beyond pedestal cope is to take AGI risk seriously.
We should call out groundless statements that overtly express “plot armor” for humans, or that assume all futures obviously involve human agency and moral value, for what they are: pedestal cope.
Let’s be mindful of invisible anthropocentrism – and take the existential risk of AGI seriously – not assuming that “alignment” or “humans will be fine” are defaults, but insisting that we ensure we don’t release an unworthy successor who snuffs us out of existence (through indifference, malice, its extended phenotype, etc).
We couldn’t keep a business running by denying the fact that cash flows matter, or that “it’ll all work out.” In business, assuming you have “plot armor” is a surefire way to go bankrupt – but humans instinctively think “it’ll all work out” with AGI.
It should be normal to want to ensure good futures for humans.
It should be seen as frankly stupid to assume that all futures and all future intelligences will treat us well – or that no risks are truly existential. Such assumptions work against the goal of good outcomes for humans.
The second thing we should do is move towards acceptance of an eventual posthuman transition.
Ultimately, pedestal cope is merely the first barrier in getting humans to accept our eventual attenuation. It is the denial stage of the five stages of posthuman grief:
But in order to move on – in order to move towards a positive, and possible future, we must move beyond denial.
We cannot build a new life after losing a loved one until we have accepted their death, made peace with it, made meaning of it, and found direction and goals that involve living well and doing good in a world without that loved one.
Similarly, we cannot work towards “positive” futures if we insist on pretending that we’ll always be the most morally valuable and high-agency, high-potentia entity in existence.
If there is something about us that is valuable (I argue that consciousness and autopoiesis [expanding potentia] are the core morally valuable traits now known, in humans and other entities), then we must ask how that morally valuable “stuff” persists and expands, even if we are not eternally at the steering wheel of eternity.
In the long-term, all things attenuate or transform:
We are building something (AGI) that likely drives us out of existence, and if it doesn’t, we’ll merge/transform into something beyond what we are now. In either case, something beyond us is emerging, and whether it is a boon to the universe or not seems partially up to us.
Yes, we should aim to get the best future shake we can get as a species, but we must also consider our greatest role: that of catalyzing the expanse of life (in whatever substrate) into the cosmos, of ensuring that the flame of power, value, and experience blazes on mightily beyond our torch.
…
Header image credit: Andersons.com