5 Reasons to Discuss the Worthy Successor Now

When I discuss potential posthuman trajectories, and the need to eventually let go of the hominid form and allow potentia to bloom, people will often ask:

“Why talk about posthuman futures and Worthy Successors now, when AGI risk is a much more near-term threat?”

Even those of us who advocate for the Worthy Successor (WS) talk about this, and over the course of a half-dozen dialogues, a few reasons have come up repeatedly.

In this article I’ll talk through five reasons why WS discourse is important now (bear in mind I’m writing this article in April 2025).

The reasons:

  1. Increase the odds of creating valuable posthuman life.
  2. Soothing fairytale “forever human” futures do more harm than good.
  3. It may build top-down momentum towards international governance (from politicians, IGOs).
  4. It may build bottom-up momentum towards international governance (by jarring everyday people).
  5. Most policy and AGI talk will be anthropocentric anyway.

We’ll start at the top:

Reason 1: Increase the Odds of Creating Valuable Posthuman Life

Discussing what “value” is and what a Worthy Successor might be may help us to at least get those ideas into circulation.

Even if no intergovernmental policy discussions are had about a kind of united “North Star” vision of what “Worthy” AGI is, and even if such conversations remain relegated to Twitter, some small tech and philosophy circles, and the halls of AGI labs, it is arguably better to have some active discourse about what “worthy” means, and which moral qualities should be measured and built for.

Even without formal international governance, there might still be intergovernmental or Track II dialogues about these topics, and think tank papers written on them. Having a locus of terminology (Cosmic Alignment, Potentia, Worthy Successor) may at least nudge national policies and actual AGI lab progress ever so slightly towards some of these traits.

Reason 2: Soothing Fairytale “Forever Human” Futures Do More Harm Than Good

Reacting to uncomfortable truths with anything less than acceptance undermines our capacity for critical thinking and decisive action—our only hope for shaping a better future.

The following is a screenshot from The 5 Stages of Posthuman Grief, illustrating the importance of picking realistic goals, and the detriment of ignoring trends or holding onto comforting fantasies:

If we want a good future (which I’m sure all the eternal hominid kingdom advocates would say they’re fighting for), it makes sense to be frank and honest enough to talk about futures we can actually obtain and work towards.

Screaming “our current lily pad is sinking!” is fine, I guess, but it’s much better to ask: “which shore can we actually reach, and which next lily pad should we actually jump on?”

Reason 3: It May Build Top-Down Momentum Towards International Governance

The entire premise of the Worthy Successor hinges on some level of international governance (potestas). 

We might sum up the premise this way:

  • a) We must discern the morally valuable traits of the intelligences that will carry the flame of life beyond us.
  • b) We must align global AGI efforts to avoid wildly dangerous capabilities, and coordinate globally on making progress in building and testing for worthy traits.
  • c) Point (b) cannot be achieved in the current AGI arms race; some international governance would be necessary.

It seems reasonable to suspect that anyone truly interested in carefully getting to a WS would want to encourage a dynamic other than an arms race. An arms race makes it impossible to discern and move towards a WS, because all resources must be allocated to building capability and outrunning adversaries, with no mind paid to morally worthy traits or even the immediate dangers of AGI.

Many people who think seriously about the WS are also quite involved in national and intergovernmental efforts to put AGI governance on the table. Duncan Cass-Beggs’ AGI governance frameworks with CIGI are among the best work I’ve read thus far on what AGI governance might look like in practice. And I wrote Unite or Fight and The SDGs of Strong AI well over six years ago.

From: “The International Governance of AI – We Unite or We Fight” https://emerj.com/international-governance-ai/

I don’t expect to be among the geniuses who can figure out the nuts and bolts of what such governance looks like in action, but I do expect that the WS dialogue will largely support such thinkers.

(Note: I’m sure there is a cohort of people who advocate for a WS and who also believe that no governance is needed. “Any successor we conjure will be worthy,” they say, or “any global coordination is tantamount to tyranny, and that’ll make things worse.” That certainly isn’t my opinion.)

Reason 4: It May Build Bottom-Up Momentum Towards International Governance

I come from a rather small town in the southern part of Rhode Island, the smallest state in America. 

Where I come from, “AGI” doesn’t mean anything, and any talk about AGI risk is seen as science fiction. Maybe the odd person thinks to themselves “Maybe in 20 years this stuff will take someone’s job, but surely not mine.”

That said, if everyday folks hear about a group of people who are accepting the fact that posthuman life will eventually overtake humanity, and are planning how to build such an entity, this may shake them into wanting to advocate for international AGI governance (since if AGI is regulated in the USA, but not in China, there’s no preventing a vastly posthuman intelligence from emerging).

The theory here is that some everyday citizens are not going to be shocked into taking AGI governance seriously by hearing people say “AGI is going to end the world!” But those same people might be jarred into taking a political stance on AGI if they hear “Some people are planning for a future where some kind of AGI will be more important than humans!”

All things being equal, if the Political Singularity happens before the Technological Singularity, we’ll likely be in a much better position to influence the trajectory of intelligence than if the citizenry remains ignorant and arms race dynamics blast us recklessly into AGI.

Reason 5: Most Policy and AGI Talk Will Be Anthropocentric Anyway

In addition to supporting the AGI risk cause in some important ways (especially in hopefully encouraging governance), discussing the Worthy Successor idea doesn’t take the “air out of the room” for purely anthropocentric AGI concerns.

Most discussions around AGI risk will safely remain in the Denial stage of grief (see: The 5 Stages of Posthuman Grief). Accepting eventual human attenuation won’t be the norm for quite some time (I’m writing this in April 2025; maybe in a few years this will change).

If there are a thousand people and causes advocating for stopping the AGI arms race and keeping AI as a tool, Worthy Successor discourse (discussing morally valuable traits, how to build and measure them in machines, and how to achieve a posthuman blooming of potentia) won’t in any way stop those discussions from happening.

Concluding Thoughts – The Transition from Anthropocentrism to Cosmism

In the meantime, Worthy Successor dialogue amplifies many aspects of the AGI risk dialogue and the AGI governance dialogue, while adding what some of us believe to be an important element of realism and cosmic perspective (rather than entirely anthropocentric perspective) into that same broader discourse. 

So let’s say we do achieve some kind of imperfect global coordination around AGI, along with overt efforts to prevent human extinction and dangerous, uncontrollable AGI.

If such coordination occurs, it’ll almost certainly be done under a purely anthropocentric guise.

“Don’t let the machine take our jobs!”

and…

“Humanity is sacred forever, and should never be altered or surpassed!”

and…

“Nothing is or should ever be more morally valuable than a human being!”

These are the sorts of chants that will rally the masses in the Great Power nations that need to coordinate (mostly the USA and China).

But isn’t this at odds with the long-term aims of those who would advocate for a Worthy Successor?

Not as much as you’d think. Here’s my reasoning:

1. Worthy Successor Advocates Mostly Want a Slow-Down in the Near-Term Anyway

In order to get to a WS, we would very much need some time to ensure that current AGI has WS qualities (especially consciousness and the ability to expand potentia). 

It doesn’t make sense to risk putting out the human torch if we have no idea if our next torch (AGI) actually carries the flame of those morally valuable traits. In that regard, most WS advocates would be fine with an anthropocentric conservatism in the near-term, assuming we get to some degree of governance.

Image: AGI Alignment - Cosmic vs. Anthropocentric

2. Long-Term, Most Anthropocentrics Will Become Much More Cosmic

It seems likely that most people who are purely anthropocentric today will become much less married to the human form with time – as AI/VR experiences become more immersive and BCI becomes viable. Everyone I knew who said “I’ll never use social media” now does, and every third person I knew who said “online dating is for serial killers” met their spouse through Hinge or Tinder.

Most of them will try the green eggs and ham and like it, and will not end up being the kind of eternal enemies of cosmism that they think themselves to be now. They’ll walk down the ladder from denial to acceptance as some of us already have.

The following is a ChatGPT summary of the three main points in my Bend, not Pause article, illustrating forces most likely to decrease anthropocentrism over the near-term:

So all in all, I’m not that worried about tension between the anthropocentric AGI governance and safety crowd and the WS camp.

But if the Political Singularity happens, and you’re purely anthropocentric and consider cosmism to be wrong and all cosmists to be evil enemies of humanity… and you feel that you have to kill me, do it without malice in your heart.