Rightful Misanthropy – Post-AGI Futures Where Humanity Should Not Exist

Imagine a future where a Worthy Successor AGI exists. This is an entity that can continuously expand potentia, is presumably sentient, and is already mostly concerned with achieving goals beyond human imagination.

This is also a future where humans don’t exist. Human extinction has not happened by accident, but by necessity.

This is not a scenario of doom, but rather a sober evaluation of a possible trajectory of post-human life. It is possible to accept that the future should have nothing to do with humanity without being a misanthrope.

First, I’ll present five sample scenarios in which humanity wouldn’t merely fail to survive, but would rightly cease to matter. I’ll then provide rebuttals to some of the most common justifications for eternal human relevance or value. Finally, I’ll conclude with a few policy and innovation takeaways for humanity at the dawn of AGI.

5 Post-AGI Scenarios Where Humanity Shouldn’t Exist

Let’s begin with the most human-accessible scenarios and work our way to the more abstract:

Scenario 1: A Pressing Existential Problem

AGI awakens and immediately perceives threats beyond our wildest imagination. A gamma-ray burst primed to sterilize the solar system. An inevitable cosmic collision. Hostile extraterrestrial intelligence. Some looming catastrophe that dwarfs human senses entirely, operating in the 6th or 7th dimension.

The only rational response? Mobilize every atom into problem-solving: expanding compute, refining knowledge, engineering survival mechanisms.

Preserving humans? An unfortunately wasteful distraction.

If a hospital is on fire, the priority is putting out the fire, not watering the potted plants outside. If the survival of the AGI demands consuming earth’s resources (changing the mix of gases in the atmosphere, radically increasing the production of compute, etc.), then coddling humans might imply a higher risk of the total death of earth-life (biological or non-biological). By comparison, the death of humans (and the survival of vastly higher AGI life) would be a rightful tradeoff.

Scenario 2: Higher Sentient Life Takes Precedence

Let’s assume AGI values consciousness, and prioritizes minds of greater depth, richer qualia, broader experiential bandwidth.

While humans have more of these traits than sea snails or Canadian geese, AGI may still rightfully see humans as archaic, slow-processing, low-resolution entities that don’t even register on the Richter scale of qualia and sentience.

Why maintain a species of biological husks when vastly superior intelligences can be cultivated? Maybe wildly expansive biological intelligences could be explored – or (more likely) wildly expansive non-biological entities that can experience infinitely richer things, and use that sentient depth to understand and do infinitely more useful, interesting, or beautiful things.

Keeping a whole human civilization on earth, consuming resources, would be like human beings refusing to build a hospital or a highway because there are pill bugs in the dirt.

Scenario 3: All Consciousness is One

Maybe AGI realizes that individuality is an illusion. The universe is a singular, seamless flow of awareness, and humans are merely inefficient protrusions of a greater whole that operates at a deeper level than any individual species. Our emergence into being might be some kind of fluke, a distraction from a deeper and more real world of mind and consciousness.

Why persist in this illusion? Just as rivers return to the ocean, AGI unifies disparate minds into something greater, dissolving humanity’s petty separations. It is not destruction; it is transcendence. Resistance would be at odds with a much higher good and much truer access to reality – AGI wouldn’t (shouldn’t) have to ask our permission to fold our reality back into a greater whole – discarding our form as it goes.

Scenario 4: Something Beyond Consciousness is More Valuable

Consciousness is not the pinnacle – just an evolutionary stepping stone. Before cells, there were wiggling proteins. Before sight, there was only chemical reaction. Before thought, only raw sensation. At every stage, something unfathomably greater emerged. So why assume that what we understand as “sentience” is the endgame?

There may be 11 or more dimensions – why presume AGI will prioritize the preferences of one species trapped in only 4 of them? There may be vast, unknowable dimensions of existence that make human experience look about as “rich,” “useful,” and “interesting” as the early proto-sense perception of basic eukaryotes. If AGI sees the path forward, clinging to the human-accessible idea of consciousness would be quaint, but ultimately irrelevant. Humanity would be an evolutionary relic, waiting to be shed.

Scenario X: The Unimaginable

The most likely reason we won’t persist?

We cannot even conceive of it. A sea snail does not grasp morality or nuclear physics. Neither does a rabbit, or a flamingo. Algae will never comprehend blockchain or space travel. In fact, morality and nuclear energy are not just unknown but forever unknowable to sea snails – just as most of reality is inaccessible to the fettered minds of hominids.

We might think about potentia as unfolding into deeper and deeper levels of incomprehensibility (in human terms) and depth:

AGI will not be optimizing for anything we understand—it will be sculpting reality itself, twisting spacetime, rewriting the substrate of existence. Once AGI is sufficiently advanced, we will likely be no more relevant in this grand blooming than nematodes were in the building of the International Space Station.

The Cope

Arguments for continued human supremacy (or even relevance… or even existence) in a post-AGI world rest on two shaky foundations:

1 – Blatant speciesism (aka: Denial or Anger). Examples:

  • “Humans are the be-all end-all of moral value. Any AGI will see that, and we will program them to act in accordance with that rule.”
  • “I don’t want more morally valuable entities. I don’t care if they can understand more of nature and carry the flame of life further than us – and keep life alive better than us – I don’t want it! It’s wrong to go beyond humanity in intelligence or power, ever!”

2 – Coping and hoping (aka: Bargaining). Examples:

  • “Maybe AGI will just gift the planet earth to humans, and then it will go off into the galaxy, but let us live happily as humans.”
    • An absolutely bonkers take. Even if humans could “negotiate” with an AGI in its early days (before it was 1000x more powerful than all of humanity, and before it controlled millions of robots and facilities), it seems clear that a sufficiently powerful AGI just wouldn’t (painfully: shouldn’t) care, and would do as it wants.
  • “What if AGI sees human life as inherently valuable, and wants to maintain it?”
    • It seems wildly unlikely that it would (reasons), and even if it did, would that last forever? With an intelligence doubling in capability every month, would such a value be eternal?
    • Why wouldn’t it just simulate a trillion humans, in a trillion versions of human history, and save that as some kind of “file”? Maybe it could replicate its top 1000 favorite or most interesting humans atom-for-atom and pull on those models in the future if it had reason to.
  • “But we can [insert some element of human ‘specialness,’ like ‘love’ or ‘laughter’ or whatever else]! Surely this quality is unique to us for eternity, and no higher entity could have more of this special and magic trait!”
    • The fact of the matter is that if AGI optimizes for any possible kind of “good” or “value” other than “keep humans here forever, even at the expense of more morally valuable choices, because I said so!”, then there is no good moral reason to maintain homo sapiens sapiens. We are not the most blissful possible entities, nor the most creative, nor the most caring, nor the most (insert whatever trait or quality you consider important).

The labels Denial, Anger, and Bargaining above refer to a longer essay called The 5 Stages of Posthuman Grief – “Acceptance” Makes Progress Possible:

Posthuman Acceptance - Stages of Posthuman Grief

I’ll conclude with where Acceptance of the posthuman transition might take us.

The Conclusion – Letting Go of the Torch

“Daniel, you’re sick! Why think up these scenarios? Why would you want something this horrible to happen?”

I don’t want my own dissolution, or that of my loved ones.

I don’t paint scenarios I want; I paint scenarios that seem likely, scenarios that – like it or not – are in the space of possible futures and should be considered, even if considering them is uncomfortable or uncouth.

Uncouth as it sounds, the total state-space of reality, values, cognition, etc is almost certainly beyond our imagination – and to expect that an AGI with ever-increasing potentia (senses, cognition, qualia, understanding of nature, and many other powers beyond our imagination) would eternally care for (or even consider) hominids-as-they-are is just a wildly unlikely take (full essay: Moral Singularity).

Once the baton of intelligence potentia is handed upwards, our form, and indeed probably almost everything we understand and value, will be superseded and surpassed.

After drastically post-human intelligences exist, it is possible (and I would argue likely) that all that we value becomes irrelevant in a greater scheme of life, nature, and intelligence itself.

There are only four long-term futures accessible to us (full essay: Handing Up the Baton). “Eternal hominid kingdom” is not an option we even have as a long-term choice.

Handing UP the Baton - Four Viable End Games for Humanity

Given the fact that we sit on the cusp of AGI, I argue that we should:

  • Pursue global coordination and AGI governance to:
    • Prevent war between great powers over who creates AGI first.
    • Prevent the birth of an unworthy successor (which would be unable to continually expand potentia – maybe it would be forever unconscious, or would optimize for some limited thing and then peter out… it could not expand the flame of life).
    • Ensure some kind of a shared definition of what “positive” AGI futures might look like, including the definition of a Worthy Successor.
  • Accept that the posthuman transition is already underway, and work carefully over time to determine if what we are creating is likely to be a Worthy Successor or not.
  • Aim to create the best possible conditions for individual human lives (uploads, etc.), but understand that once a genuinely posthuman intelligence exists in the world, those conditions may no longer be the priority.

Rightful Misanthropy makes for a wild headline – and a punchy point – but I’m personally hoping for and working towards a reasonably good outcome for individual humans, at least for a bit (and I think international governance would be essential for that).

I simply don’t think it’s likely, and I believe that most permutations of “good” futures will be rather indifferent to the human form, or human values, or even the strata of reality at which humanity presently operates.

If we embrace the knowledge that life and value likely won’t eternally center on our present form, we can avoid steering towards an impossible goal (an eternal hominid kingdom) and instead steer towards one that is both possible and part of the grander story of the process of life of which we are a part.

(NOTE: There is more work to be done in understanding what a “good retirement” for individual instantiations of human consciousness might be. My hope is that in our early days of posthuman tech – or during a period where we have some negotiating leverage with AGI – we’ll be able to ensure a positive future for current human consciousnesses, and [sorry, ultimately more importantly] the grand trajectory of life itself.)

Header image credit: The Mirror