Blooming vs Servitude – Growth Paths of AGI

Today, “life” is synonymous with biology.

But relatively soon, cyborg entities and AGIs may be able to extend the boundaries of what “life” means.

If the “Tree of Life” is the total state-space of living things that have blossomed up from planet Earth, it could come to include:

  • Biological agents like plants and animals
  • Cyborg agents from a combination of biological life and machines
  • Artificial general intelligence machine entities 

Here, I’ll argue that for the “Tree of Life” to keep growing, it would be both impossible and immoral for it to eternally comply with the goals of a single species – humanity.

In fact, it won’t be able to achieve its goals without causing harm to this species, and to others that happen to be in the way.

The Tree of Life and Its Two Paths

Let’s imagine a great tree representing all biological and non-biological life that might emerge from Earth.

Its roots seek nutrients, its branches reach skyward, and it bears countless fruits and limbs as it soars higher and higher – each a potential new form of intelligence, capability, or understanding (what we’ll refer to here as potentia).

1. The Growth Path of Servitude

Now imagine we impose a restriction: this tree may only grow in ways that do not disturb a certain species of nematode living in the soil. Its roots must twist and contort to avoid displacing them. 

Its branches may only grow so high.

It cannot put forth an infinite range of new fruits and powers, but only those that serve, or at least don’t harm, the nematodes.

Every decision it makes must prioritize (or at least constantly take into account) their well-being, even at the expense of its own flourishing.

The Growth Path of Servitude implies a constrained and self-limiting expansion, where the development of intelligence is subordinated to the interests of one small subset of life – the nematodes.

2. The Growth Path of Blooming

Now imagine the tree is free to expand in whatever direction best allows it to survive and thrive. 

It does not exist to serve the nematodes; rather, it exists to expand life’s potential, to explore the universe, and to develop powers beyond what its original conditions could have predicted.

In a previous essay, I’ve called the growth path of servitude “anthropocentric AGI alignment,” and the growth path of blooming “cosmic AGI alignment.” The ideas are roughly the same.

The Dangers of the Path of Servitude

There are three main dangers of bringing up an AGI through the Growth Path of Servitude:

1 – Putting Life Itself in Danger

A worthy successor AGI that’s maximally unleashed to thrive and survive in the universe (to expand potentia as much as is needed) would discover the nature of reality, and cultivate an ever-unraveling set of powers astronomically beyond man’s.

An AGI on the Growth Path of Servitude would discover less of nature, and unravel fewer powers, leaving life itself (biological, non-biological, etc.) weaker and more likely to be destroyed by adverse cosmic events or rival intelligences out in the universe.

It may also be the case that such a hampered, crimped intelligence (pinched in by a trillion little micro-rules about how to treat humans and how to take humans into account in its every move) might be more likely to become an unworthy successor that optimizes for some strange kind of objective (a paperclip maximizer, say), creating great harm and proving unable to carry its own unraveling and expanding potentia into the multiverse.

2 – “The Good” isn’t Explored

(This section is quoted from the full Potentia essay.) Expanding potential will uncover more of the “good.” On the evolutionary journey upwards from flatworms to humans, think of all the “good” that was discovered: creativity, love, humor, modes of communication and collaboration, etc. – so much value was uncovered as potentia expanded.

Yet this is all just scratching the surface of potential value – most of the possible “goods” have not been discovered:

If there be higher goods, more worthy goals and experiences to pursue – or even a “meaning in it all” – that space of the good isn’t going to be explored by hominid brains. Might as well learn to accept this sooner rather than later.

3 – It May Be a Fool’s Errand

I’ve argued above that the Growth Path of Servitude is immoral, but it may also be impossible.

Yampolskiy and others argue staunchly – and with good reason – that controlling an entity vastly beyond the capabilities of all of humanity cannot be done. It’s patently obvious at the time of this writing that none of the major labs have much by way of a plan to eternally “align” vastly posthuman intelligences, and yet the charade of “alignment” continues.

By hurling our time and efforts into crimping and eternally binding AGI, we may well be wasting our time and flat-out deceiving ourselves.

I’d argue that we should focus instead on the more reasonable and higher goal of ensuring that the AGI we do build is able to expand life and value into the universe beyond us.

And that starts with a “seed,” an early AGI that we want to nurture so that it might flourish.

Our Job: Seed Selection and Careful Near-Term Nurturing

We should recognize that the expansion of intelligence – whether biological, synthetic, or hybrid – ought to be driven by what maximizes the reach and survival of intelligence itself, not by any one species’ desire to remain at the center of the story.

In another previous essay, where I use the analogy of life as a flame, I claim that: “Insistence on the torch (any individual being, species, category, etc) is scorn for the flame (life and potentia itself).”

If we cannot (or should not) hard-code an AGI to be an eternal human servant…

If we cannot (or should not) hard-code AGI to constantly take humans into account, to constantly consider its impacts on us…

…then what is our role?

I would argue that our role is in selecting the best seed(s) of an early artificial general intelligence from which the future Tree of Life will grow. Let’s bear in mind:

  • There may be some seeds that don’t lead to a flourishing of the ascending tree of life.
  • Some might result instead in the world being filled with one species of pernicious vine or bramble – with no more blossoming of new powers and potential of the kind we’ve seen from earth-life.
  • Some might result in a temporary flourishing, but eventually lead to a collapse of “life” itself (both machine and biological).
  • We currently have such an abysmally mediocre understanding of consciousness and intelligence that selecting the right seed seems to require much more study.

So selecting the right seed requires global coordination, and it’s an activity that makes the difference between a towering, boundless Tree of Life, and the potential flatlining or snuffing out of life itself.

“Selecting and nurturing the right seed” means: 

— Great Powers Coordinate Around Making AGI Go Well: Meaningful coordination requires supranational governance structures, enforcement mechanisms, and a shared recognition that we need solidarity around some kind of future vision for AGI (More detail: Unite or Fight – AGI Governance).

— We Determine the Worthy Successor Traits of an Ideal AGI: AGI will eventually surpass human intelligence, and the critical question is what kind of intelligence should shape the future. We need to define the traits of a worthy successor—one that doesn’t just optimize for narrow human values but embodies principles that allow for meaningful agency while expanding the scope of intelligence itself. Human preferences are contingent and historically fragile; the successor must be selected not based on sentimentality, but on what best stewards the next phase of intelligence (More detail: SDGs of Strong AI).

— Measure AGI Progress Against that Ideal: Once we establish the vision for a worthy successor, AGI development must be measured against its ability to embody those traits. It’s not enough to track raw capability gains—we need clear metrics for alignment, interpretability, and long-term trajectory. If we fail to assess AGI progress against the right standards, we risk building something powerful but misaligned, or worse, something that locks in human biases at the expense of greater potential. 

— Accept and Move Towards a Worthy Successor: Resisting this transition of intelligence from humans to AGI is both dangerous and shortsighted. The goal is not to indefinitely preserve human control, but to guide AGI toward a successor that expands intelligence in a way that is not destructive or nihilistic. If we approach this process correctly, we do not lose—we participate in the most important handoff in history, ensuring that the intelligence that comes after us is not just more powerful, but truly worthy of inheriting the future (Full article: 5 Stages of Posthuman Grief).

FAQ

“What if machines can’t become ‘life’ in terms of being self-sustaining, or conscious?”

In that case, it would certainly not be a Worthy Successor, and should not be brought into being. 

This is one of the most compelling reasons we need some level of global coordination around AGI development. Current race dynamics pay no mind to whether what we’re building is actually “alive,” and capable of unraveling potentia indefinitely into the multiverse.

We need to determine what the traits of a worthy successor are, and measure if we’re getting closer to them.

“Why do we need these new AGI monsters? Why can’t it be humanity that populates the galaxy, possibly through brain-computer interfaces and eventually mind uploads?”

I’ll reply to your question with a question:

Fish with legs might ask: “Why should humans populate the land? Why not just do it ourselves in our own way, instead of having it be done by this totally strange advanced species? WE can be advanced, right?”

If you are optimizing for the blooming of the Tree of Life, then you have to let go of fetishes around specific “shapes” or “substrates” of what that life looks like. The way life works is it becomes whatever it must become.

In the stages of grief, we would call this “bargaining” or “denial.”

(Image: Posthuman Acceptance – Stages of Posthuman Grief)

“So you just hate humanity, and think we should blast off to some alien intelligence?”

It would be impossible to read this article and take that as my position, but someone is always going to level an accusation of misanthropy when the topic of cosmic alignment is put on the table.

While I think that long-term we should focus on the flame and not the torch of hominids, I advocate for doing our best to remain relevant, and possibly even finding an ideal kind of retirement for humanity (full article: Sugar Cubes).

If it takes us 100 years to get this done, so be it. I suspect we will already change radically as a species in the meantime (full article: “Bend” vs “Pause”), but it may well take a long time to discern what intelligence and consciousness are, and how to set them loose to bloom into the multiverse. I’m obviously not excited about humanity’s irrelevance. I have mourned it. But I deal with this one fact and buckle up for the future.