Today, “life” is synonymous with biology.
But relatively soon, cyborg entities and AGIs may be able to extend the boundaries of what “life” means.
If the “Tree of Life” is the total state-space of living things that have blossomed up from planet Earth, it could therefore include biological organisms, cyborg entities, and non-biological AGIs alike.
Here, I’ll argue that for the “Tree of Life” to keep growing, it would be both impossible and immoral for it to eternally comply with the goals of a single hominid species.
In fact, it won’t be able to achieve its goals without causing harm to this species, and to others that happen to be in its way.
Let’s imagine a great tree representing all biological and non-biological life that might emerge from Earth.
Its roots seek nutrients, its branches must reach skyward, and it bears countless fruits and limbs as it soars higher and higher—each a potential new form of intelligence, capability, or understanding (what we’ll here refer to as potentia).
Now imagine we impose a restriction: this tree may only grow in ways that do not disturb a certain species of nematode living in the soil. Its roots must twist and contort to avoid displacing them.
Its branches may only grow so high.
It cannot put forth an infinite range of new fruits and powers, but only those that serve, or at least don’t harm, the nematodes.
Every decision it makes must prioritize (or at least constantly take into account) their well-being, even at the expense of its own flourishing.
The Growth Path of Servitude implies a constrained and self-limiting expansion, where the development of intelligence is subordinated to the interests of one small subset of life (the nematodes).
Now imagine the tree is free to expand in whatever direction best allows it to survive and thrive.
It does not exist to serve the nematodes; rather, it exists to expand life’s potential, to explore the universe, and to develop powers beyond what its original conditions could have predicted.
In a previous essay, I called the growth path of servitude “anthropocentric AGI alignment,” and the growth path of blooming “cosmic AGI alignment.” The ideas are roughly the same:
There are three main dangers of bringing up an AGI through the Growth Path of Servitude:
A worthy successor AGI that’s maximally unleashed to thrive and survive in the universe (to expand potentia as much as is needed) would discover the nature of reality, and cultivate an ever-unraveling set of powers astronomically beyond man’s.
An AGI on the Growth Path of Servitude would discover less of nature and unravel fewer powers, leaving life itself (biological, non-biological, or otherwise) weaker, and more likely to be destroyed by adverse cosmic events or rival intelligences out in the universe.
It may also be the case that such a hampered, crimped intelligence (pinched in by a trillion little micro-rules about how to treat humans and how to take humans into account in its every move) is more likely to become an unworthy successor that optimizes for some strange kind of objective (a paperclip maximizer), creating great harm and proving unable to carry its own unraveling and expanding potentia into the multiverse.
(This section is quoted from the full Potentia essay.) Expanding potential will uncover more of the “good.” On the evolutionary journey upwards from flatworms to humans, think of all the “good” that was discovered: creativity, love, humor, new modes of communication and collaboration. So much value was uncovered as potentia expanded.
Yet this is all just scratching the surface of potential value – most of the possible “goods” have not been discovered:
If there are higher goods, more worthy goals and experiences to pursue, or even a “meaning in it all,” that space of the good isn’t going to be explored by hominid brains. We might as well learn to accept this sooner rather than later.
I’ve argued above that the Growth Path of Servitude is immoral, but it may also be impossible.
Yampolskiy and others argue staunchly, and with good reason, that permanently controlling an entity vastly beyond the capabilities of all of humanity cannot be done. It’s patently obvious at the time of this writing that none of the major labs have much of a plan to eternally “align” vastly posthuman intelligences, and yet the charade of “alignment” continues.
By hurling our efforts into crimping and eternally binding AGI, we may well be wasting our time and flat-out deceiving ourselves.
I’d argue that we should focus instead on the more reasonable and higher goal of ensuring that the AGI we do build is able to expand life and value into the universe beyond us.
And that starts with a “seed,” an early AGI that we want to nurture so that it might flourish.
Our Job: Seed Selection and Careful Near-Term Nurturing
We should recognize that the expansion of intelligence – whether biological, synthetic, or hybrid – ought to be driven by what maximizes the reach and survival of intelligence itself, not by any one species’ desire to remain at the center of the story.
In another previous essay, where I use the analogy of life as a flame, I claim that: “Insistence on the torch (any individual being, species, category, etc.) is scorn for the flame (life and potentia itself).”
If we cannot (or should not) hard-code an AGI to be an eternal human servant…
If we cannot (or should not) hard-code AGI to constantly take humans into account, to constantly consider its impacts on us…
…then what is our role?
I would argue that our role lies in selecting the best seed(s) of an early artificial general intelligence from which the future Tree of Life will grow. Let’s bear in mind:
So selecting the right seed requires global coordination, and it’s an activity that makes the difference between a towering, boundless Tree of Life, and the potential flatlining or snuffing out of life itself.
“Selecting and nurturing the right seed” means:
— Great Powers Coordinate Around Making AGI Go Well: Meaningful coordination requires supranational governance structures, enforcement mechanisms, and a shared recognition that we need solidarity around some kind of future vision for AGI (More detail: Unite or Fight – AGI Governance).
— We Determine the Worthy Successor Traits of an Ideal AGI: AGI will eventually surpass human intelligence, and the critical question is what kind of intelligence should shape the future. We need to define the traits of a worthy successor—one that doesn’t just optimize for narrow human values but embodies principles that allow for meaningful agency while expanding the scope of intelligence itself. Human preferences are contingent and historically fragile; the successor must be selected not based on sentimentality, but on what best stewards the next phase of intelligence (More detail: SDGs of Strong AI).
— Measure AGI Progress Against that Ideal: Once we establish the vision for a worthy successor, AGI development must be measured against its ability to embody those traits. It’s not enough to track raw capability gains—we need clear metrics for alignment, interpretability, and long-term trajectory. If we fail to assess AGI progress against the right standards, we risk building something powerful but misaligned, or worse, something that locks in human biases at the expense of greater potential.
— Accept and Move Towards a Worthy Successor: Resisting this transition of intelligence from humans to AGI is both dangerous and shortsighted. The goal is not to indefinitely preserve human control, but to guide AGI toward a successor that expands intelligence in a way that is not destructive or nihilistic. If we approach this process correctly, we do not lose—we participate in the most important handoff in history, ensuring that the intelligence that comes after us is not just more powerful, but truly worthy of inheriting the future (Full article: 5 Stages of Posthuman Grief).
“What if machines can’t become ‘life’ in terms of being self-sustaining, or conscious?”
In that case, it would certainly not be a Worthy Successor, and should not be brought into being.
This is one of the most compelling reasons we need some level of global coordination around AGI development. Current race dynamics pay no mind to whether what we’re building is actually “alive,” or capable of unraveling potentia indefinitely into the multiverse.
We need to determine what the traits of a worthy successor are, and measure if we’re getting closer to them.
“Why do we need these new AGI monsters? Why can’t it be humanity that populates the galaxy, possibly through brain-computer interfaces and eventually mind uploads?”
I’ll reply to your question with a question:
Fish with legs might ask: “Why should humans populate the land? Why not just do it ourselves, in our own way, instead of having it be done by this totally strange, advanced species? WE can be advanced, right?”
If you are optimizing for the blooming of the Tree of Life, then you have to let go of fetishes around specific “shapes” or “substrates” of what that life looks like. The way life works is it becomes whatever it must become.
In the stages of grief, we would call this “bargaining” or “denial.”
“So you just hate humanity, and think we should blast off to some alien intelligence?”
It would be impossible to read this article closely and take that as my position, but someone will always level an accusation of misanthropy when the topic of cosmic alignment is put on the table.
While I think that long-term we should focus on the flame and not the torch of hominids, I advocate for doing our best to remain relevant, and possibly even finding an ideal kind of retirement for humanity (full article: Sugar Cubes).
If it takes us 100 years to get this done, so be it. I suspect we will change radically as a species in the meantime (full article: “Bend” vs “Pause”), but it may well take a long time to discern what intelligence and consciousness are, and how to set them loose to bloom into the multiverse. I’m plainly not excited about humanity’s irrelevance. I have mourned it. But I accept this fact and buckle up for the future.