There is No Pause – We Must Bend the Posthuman Trajectory

There are people who imagine a future in the year 4000 which is nearly identical to 2025, but with robot butlers, easier travel, and similar conveniences.

These people believe, understandably, that humans do not want post-human or even trans-human transitions. Augmenting the mind, or creating minds vastly beyond the human, seems obviously sacrilegious, and something most humans will obviously resist.

But in this article I’ll argue that the opposite is true, that in fact:

  • There is little energy behind preserving humanity as it is, and the forces pulling us toward transcending the human form and the human condition are strong and inevitable
  • We are already in the process of becoming post-human
  • We must accept that this process has already begun and aim to bend its trajectory, rather than pretending we can wait to “get AI right” before we decide what we turn into

The trajectory is already in motion, and we must think about guiding it well now, as a process underway, rather than as a process we can “push start” on in 2 or 10 or 200 years (as some suggest).

In the Intelligence Trajectory Political Matrix I discuss the different kinds of “end goals” that different people might want to pursue. I’ll argue in this article that the “Preservation” camp is actually somewhat weak – that any small inclination towards Progression very quickly turns into Ascension and posthumanism – and that we should deal with innovation and regulation as if the transhuman transition were already underway.

I’ll argue that we must focus on bending, rather than entirely pausing, the posthuman trajectory:

Below I’ll list three of the most compelling reasons that we’re already moving swiftly toward posthumanism (with no option to freeze “homo sapiens 1.0”).

I’ll conclude with what this implies for public policy and steering our next steps as a species.

Reason 1: The Hedonic Treadmill (for Pleasure or for Power)

The hedonic treadmill is the idea that an individual’s level of happiness, after rising or falling in response to positive or negative life events, ultimately tends to move back toward where it was prior to these experiences.

Winning the lottery doesn’t drive up long-term fulfillment. 

People in 2024 with 3000 TV channels (and YouTube) aren’t tangibly “happier” than previous generations who just had newspapers or AM radio and little else.

Here’s how this plays out with regard to the posthuman transition:

  • People get access to an AI assistant that ends up giving great advice for their career, but then they demand an agent that helps with dating advice, or mental health, or whatever else – all the way to replacing their friendships, mentors, or even their romantic partners with hyper-personalized selfless AI agents.
  • People get an AI / VR device that delivers hyper-personalized experiences of relaxation before bed – but then they want this device to conjure personalized humor or entertainment experiences for them, or educational experiences for them, etc.
  • People get access to AI that can automate the process of building a grocery list, but then they want AI to actually buy the groceries at the lowest cost and have them arrive at the perfect time. Then they want a robotic service to put the groceries in the fridge – or cook them into a meal. And on and on.

You say:

“But Dan, most people aren’t just going to escape into some AI world of pleasure simply because the technology makes it easy to do so.”

I’d reply immediately by telling you that most people are seeking escape, and that drug, pornography, and video game addiction – and indeed the religious ideas of Samsara and the vale of tears (among others) – are all pointers to the fact that most people want to escape the state of nature. If given a viable option, they would.

But also, pleasure is not the only pull to digital immersion. Power is also a motive.

We can think about the first batches of ravenous tech adopters who wave good-bye to the human condition as roughly fitting into two camps – explored in more detail in the full World Eaters vs Lotus Eaters article:

Lotus Eaters and World Eaters - danfaggella.com

Those who want to be productive in the “real” world will not do so by walking around in the monkey suit, but by wielding more and more of their volition through myriad virtual means. The cost of being ambitious in the era of AI-enabled work and early brain-computer interface tech will be giving up the current human condition almost entirely (full essay: Ambitious Tech).

This won’t be only for “tech fanatics”; it will be a necessity for anyone who wants to be productive and relevant in the increasingly virtual future. It’s an attractor state that I would argue can’t be stopped.

It should also be noted that even the base pursuit of pleasure in AI / VR worlds will not simply be a simulation of the current human condition (i.e. being in a hot tub with Tinashe or something), the hedonic treadmill will quickly stretch into realms of experience that don’t have proxies for the natural world – into strange, personal universes of experience optimized not for fidelity to reality, but to whatever the user responds to (full essay: The 7 Phases of Posthumanism).

People aren’t eternally “satisfied” (in their wellbeing, productivity, or anything else) by any single improvement in their personal or professional lives.

They immediately look out for the next improvement, the next advancement, the next convenience, the next saver of time or money, or the next faster mode of fulfilling their core desires.

Reason 2: The Shifting of Norms in Young People

As I mention in my Your Dystopia is Myopia article, when people think of radical changes to the human condition (such as humans living in VR worlds most of their waking hours, humans replacing most of their human relationships with more fulfilling and useful AI relationships, or humans choosing brain-computer interfaces to understand more of the world or get more done by splitting their brain activity across multiple tasks at once), they’ll typically say:

“That would never happen, because (a) that technology is impossible, and (b) people would never do something so unnatural!”

But we are already monstrously “unnatural” in our daily lives today compared to our ancestors just 2-3 generations ago – and our world today is full of tech that was previously considered “impossible.”

To paraphrase and riff on Planck’s Principle:

“Science progresses one funeral at a time” – or – “Tech adoption progresses one birth at a time.”

My father did a thousand “unthinkable” things in the eyes of my grandparents. He grew his hair long, he married someone who wasn’t Italian, he listened to rock and roll music.

  • Do you think for even 5 seconds that children who have spent the first 10 years of their lives 6 inches from an iPad screen will revere the “sacredness” of the “real” (as opposed to the “virtual”) in the same way you do?
  • Do you think for even 5 seconds that a child who grows up with an all-knowing, kind AI tutor (who knows their preferences and moods and tailors all communication to them) will revere the “sacredness” of “human relationships” (as opposed to those with machines) in the same way you do?

Humans tend to think that the music that was popular during their coming of age is the greatest music ever created, and that all that follows is junk.

Humans tend to think that technology introduced when they were 20-30 is “cutting edge”, and technology beyond that point is unnatural and strange.

Even without any fundamental breakthroughs or advancements in AI tech, we’re nearing the point where programmatically generated experiences will replace many of our usual ones.

Young people with exposure to the tip of the iceberg of this tech will not say “Oh, but this is unnatural, I shouldn’t do this,” any more than my father thought “Oh no, this ‘rock and roll’ is clearly a bad thing I better not listen to it.”

People will do whatever helps them achieve pleasure or power, and as new, interesting options arrive (especially if those options are already popular among some cliques of the youth), young people will jump right in. Posthumanism is on its way now, and we must bend its trajectory as the process occurs; there will be no waiting 100 years until “we get AGI right” (if such a thing is even possible).

(Ben Goertzel discusses how youth are pushing AI adoption in his interview here. Lambert Hogenaut – who leads forecasting at the United Nations – discusses how willing the youth is to adopt radical changes to the human condition in his interview here.)

Reason 3: No One Wants “Real” or “Natural” Anyway

I consider this to be the most damning reason that there is no pause button on the posthuman transition.

The argument goes like this:

  • We are wired for fulfillment, not for truth. The brain evolved not to seek what’s “real,” but to chase what satisfies its core reward circuits—pleasure, admiration, connection, dominance. If a digital experience fires those circuits more effectively than reality, we will choose it, just as we already prefer sugar-laden foods over bland but “real” ancestral diets.
  • Most of what we love is already artificial. The music blasting in your ears, the carefully lit movies that make you cry, the social media posts engineered to get likes—none of these are “natural.” They are technological hacks of our emotions, and we embrace them fully. AI-generated immersion will simply be a more refined and irresistible version of this.
  • We abandon “real” the moment a synthetic alternative more reliably fulfills a need or drive. People choose porn over real partners, social media over face-to-face interaction, and video games over real-world challenges. The shift isn’t about deception—it’s about efficiency. If an AI-generated world gives us admiration, relaxation, and love with fewer costs and risks, why wouldn’t we take it?

Here’s a graphic that expresses this idea:

Closing the Human Reward Circuit
  • Reality is full of friction—synthetics are frictionless. A real conversation involves awkward pauses and social risk. A virtual AI companion can respond flawlessly, with perfect timing and empathy. A real career demands years of struggle; a virtual world can give instant status and purpose. Humans are friction-minimizing machines, and AI immersion is the ultimate lubricant.
  • The “real” argument is already lost. We don’t seek truth; we seek stimulation. We don’t reject artificiality; we refine it. The question isn’t whether people will choose AI immersion over reality—the question is how fast and how deeply they will dive once it’s compelling enough. Eventually, they dive in so deeply that there is almost nothing “human” left (just as there is relatively little in common between humans and the first worms that we eventually evolved from), and we have to accept this.

This is explored in more depth in You Don’t Want What You Think You Want, and in Closing the Human Reward Circuit.

People do not want “real” or “natural”, they want the fulfillment of their drives.

Humans constantly oscillate between drives, and as soon as their goals (pleasure, power, or otherwise) are fulfilled more cost-effectively through something “not real” or “not natural”, they’ll adopt it.

Sacredness is not a barrier that will prevent changes in the human condition. Your personal level of discomfort is not going to stop the rest of humanity, particularly younger people, from moving ahead onto monstrous new kinds of experience. 

Conclusion – Let’s Talk About What We’re Turning Into

The forces holding onto the “2020s human experience” are giving way to the obliteration of that experience at a faster pace than in any other decade in human history, perhaps by a factor of 100.

The hedonic treadmill, the new “norms” of the youth, and human nature itself are already in the process of turning us into something beyond or other than what is human.

Given that this transition is underway already, we should be discussing real, viable futures, not “eternal hominid kingdom” fairytales.

We need to ask:

  • If we’re turning into something already, something beyond humanity – what should we be turning into? (Perhaps a kind of Worthy Successor?)
  • How can we get the US, China, and other nations on the same page about what we mean by “preferable” and “non-preferable” futures? (Full article: AGI Governance – Unite or Fight)
  • How can we measure and detect if we’re moving closer to such a preferable, beneficial future?
  • How can we prevent all-out conflict as new permutations of mind come into being in the same physical space?

Taking our heads out of the sand means recognizing that we need to think about our direction now.

Transcendence is already well under way.

We must bend (for the better) the posthuman trajectory – rather than attempt to pause or stop it.

Enough with fairytales of eternal AI caretakers and the 2020s lasting forever, or even homo sapiens lasting much longer. Let’s be careful that we don’t move recklessly and screw up the hand-off, but let’s talk frankly about what’s next, because the lily pad we’re standing on now is sinking.

(Note 1: For the record, I am not against efforts like “Pause AI” in principle. It may well be that slowing things down is best, and I’ve long argued that global AGI governance in order to slow things down would be a net good. I simply believe it is false to think that completely pausing is possible. We can bend the trajectory of change; we cannot dam the waters entirely. This is the one fact the world hates – but we must face it manfully and paint possible futures amongst the uncomfortable changes that inevitably lie ahead.)

(Note 2: Getting leading thinkers to discuss what a worthy successor is, and how to move towards it, is what I’m doing in the Worthy Successor interview series on The Trajectory YouTube channel. Tune in if that’s a dialogue you want to be part of, and feel free to suggest guests. Bostrom and Bach were really fun ones.)