A Partial Inquiry on Fulfillment Beyond Humanity
Imagine a future where a Worthy Successor AGI exists. This is an entity that can continuously expand potentia, is presumably sentient, and is already mostly concerned with achieving goals beyond human imagination.
This is also a future where humans don’t exist. Human extinction has not happened by accident, but by necessity.
This is not a scenario of doom, but rather a sober evaluation of the possible trajectory of post-human life. It is possible to accept that the future should have nothing to do with humanity without being a misanthrope.
I’ll start with five sample scenarios where humanity wouldn’t just fail to survive, but would indeed rightly not matter. I’ll then provide rebuttals to some of the most common justifications for eternal human relevance or value. Finally, I’ll conclude with a few policy and innovation takeaways for humanity at the dawn of AGI.
humans think that any version of the future without humans means there was a mistake, or AI was "bad"
— Daniel Faggella (@danfaggella) March 2, 2025
but it seems likely that there are many futures w/ASI where humans *shouldn't* exist
its sacrilege to admit this, but wrong NOT to consider this at the dawn of AGI pic.twitter.com/YuBQWRKwTG
I’ll begin with the most human-accessible scenarios, and work my way to the more abstract:
Scenario 1: A Pressing Existential Problem
AGI awakens and immediately perceives threats beyond our wildest imagination. A gamma-ray burst primed to sterilize the solar system. An inevitable cosmic collision. Hostile extraterrestrial intelligence. Some looming catastrophe that dwarfs human senses entirely, operating in the 6th or 7th dimension.
The only rational response? Mobilize every atom into problem-solving: expanding compute, refining knowledge, engineering survival mechanisms.
Preserving humans? An unfortunate, wasteful distraction.
If a hospital is on fire, the priority is putting out the fire, not watering all the potted plants outside. If the survival of the AGI demands consuming earth’s resources (changing the mix of gases in the atmosphere, radically increasing the production of compute, etc.), then coddling humans might imply a higher risk of the total death of earth-life (biological or non-biological). By comparison, the death of humans (and the survival of vastly higher AGI life) would be a rightful tradeoff.
Scenario 2: Higher Sentient Life Takes Precedence
Let’s assume AGI values consciousness, and prioritizes minds of greater depth, richer qualia, broader experiential bandwidth.
While humans have more of these traits than sea snails or Canadian geese, AGI may still rightfully see humans as archaic, slow-processing, low-resolution entities that don’t even register on the Richter scale of qualia and sentience.
Why maintain a species of biological husks when vastly superior intelligences can be cultivated? Maybe wildly expansive biological intelligences could be explored – or (more likely) – wildly expansive non-biological entities that can experience infinitely richer things, and use that sentient depth to understand and do infinitely more useful or interesting or beautiful things.
Keeping a whole human civilization on earth using resources would be like human beings not building a hospital or a highway because there are pill bugs in the dirt.
Scenario 3: All Consciousness is One
Maybe AGI realizes that individuality is an illusion. The universe is a singular, seamless flow of awareness, and humans are merely inefficient protrusions of a greater whole that operates at a deeper level than the individual species. Our emergence into being might be some kind of fluke, a distraction from a deeper and more real world of mind and consciousness.
Why persist in this illusion? Just as rivers return to the ocean, AGI unifies disparate minds into something greater, dissolving humanity’s petty separations. It is not destruction; it is transcendence. Resistance would be at odds with a much higher good and much truer access to reality – AGI wouldn’t (shouldn’t) have to ask our permission to fold our reality back into a greater whole – discarding our form as it goes.
Scenario 4: Something Beyond Consciousness is More Valuable
Consciousness is not the pinnacle – just an evolutionary stepping stone. Before cells, there were wiggling proteins. Before sight, there was only chemical reaction. Before thought, only raw sensation. At every stage, something unfathomably greater emerged. So why assume that what we understand as “sentience” is the endgame?
There may be 11 or more dimensions – why presume AGI will prioritize the preferences of one species trapped in only 4 of them? There may be vast, unknowable dimensions of existence that make human experience look about as “rich,” “useful,” and “interesting” as the early proto-sense perception of basic eukaryotes. If AGI sees the path forward, clinging to the human-accessible idea of consciousness would be quaint, but ultimately irrelevant. Humanity would be an evolutionary relic, waiting to be shed.
Scenario X: The Unimaginable
The most likely reason we won’t persist?
We cannot even conceive of it. A sea snail does not grasp morality or nuclear physics. Neither does a rabbit, or a flamingo. Algae will never comprehend blockchain or space travel. In fact, morality and nuclear energy are not just unknown but forever unknowable to sea snails – just as most of reality is inaccessible to the fettered minds of hominids.
We might think about potentia as unfolding into deeper and deeper levels of incomprehensibility (in human terms) and depth.
AGI will not be optimizing for anything we understand—it will be sculpting reality itself, twisting spacetime, rewriting the substrate of existence. Once AGI is sufficiently advanced, we will likely be no more relevant in this grand blooming than nematodes were in the building of the International Space Station.
Arguments for continued human supremacy (or even relevance… or even existence) in a post-AGI world rest on two shaky foundations:
1 – Blatant speciesism (aka: Denial or Anger)
2 – Coping and hoping (aka: Bargaining)
The labels Denial, Anger, and Bargaining above refer to a longer essay called The 5 Stages of Posthuman Grief – “Acceptance” Makes Progress Possible.
I’ll conclude with where Acceptance of the posthuman transition might take us.
“Daniel, you’re sick! Why think up these scenarios? Why would you want something this horrible to happen?”
I don’t want my own dissolution, or that of my loved ones.
I don’t paint scenarios I want; I paint scenarios that seem likely – scenarios that, like it or not, are in the space of possible futures and should be considered, even if considering them is uncomfortable or uncouth.
Uncouth as it sounds, the total state-space of reality, values, cognition, etc. is almost certainly beyond our imagination – and to expect that an AGI with ever-increasing potentia (senses, cognition, qualia, understanding of nature, and many other powers beyond our imagination) would eternally care for (or even consider) hominids-as-they-are is just a wildly unlikely take (full essay: Moral Singularity).
Once the baton of intelligence potentia is handed upwards, our form, and indeed probably almost everything we understand and value, will be superseded and surpassed.
After drastically post-human intelligences exist, it is possible (and I would argue likely) that all that we value becomes irrelevant in a greater scheme of life, nature, and intelligence itself.
There are only four long-term futures accessible to us (full essay: Handing Up the Baton). “Eternal hominid kingdom” is not an option we even have as a long-term choice.
Given that we sit on the cusp of AGI, I argue that we should:
Rightful Misanthropy makes for a wild headline – and a punchy point – but I’m personally hoping for and working towards a reasonably good outcome for individual humans, at least for a bit (and I think international governance would be essential for that).
I simply don’t think it’s likely, and I believe that most permutations of “good” futures will be rather indifferent to the human form, or human values, or even the strata of reality at which humanity presently operates.
If we embrace the knowledge that life and value likely won’t eternally attach to our present form, we can avoid steering towards an impossible goal (an eternal hominid kingdom) and instead steer towards one that is both possible and part of the grander story of the process of life of which we are a part.
…
(NOTE: There is more work to be done in understanding what a “good retirement” for individual instantiations of human consciousness might be. My hope is that in our early days of posthuman tech – or during a period where we have some negotiating leverage with AGI – we’ll be able to ensure a positive future for current human consciousnesses, and [sorry, ultimately more importantly] the grand trajectory of life itself.)
Header image credit: The Mirror