The guiding ideals of most humans are naturally oriented towards the benefits of, well, human beings.
“We should create technologies that improve the lives of current and future human beings.”
–
“We should prevent harm to our planet because if we destroy it, we destroy the ecology that all humans rely on.”
–
“We should develop AGI to align to human needs and empower us to live free from war, scarcity, and want.”
Even considering vastly powerful artificial general intelligence, people speak about our aims as a global community as being entirely centered on the upside and downside for humanity (some people also include the broader biosphere in their stated concerns).
Our talk of the AI future reflects this. “Best cases” involve AGI eternally serving homo sapiens. “Worst cases” involve basically any scenario where humans are not the sole locus of moral value and volition in the cosmos.
Any future scenario that implies benefits to non-human life, and any future where visibly homo-sapiens-like entities are somehow not in “control,” is commonly seen as “bad.”
But this limited anthropocentrism is not ubiquitous.
“Every true man is a cause, a country, and an age; requires infinite spaces and numbers and time fully to accomplish his design…” – Emerson, Self-Reliance
In fact, among thinkers who have thoroughly considered the moral and psychological development of individuals and civilizations, there is a nearly inevitable pull towards cosmic, post-human aims and ideals.
And at the dawn of posthuman minds (born from AGI, from brain-computer interfaces, or otherwise), it behooves us to urgently consider cosmic moral aspirations rather than purely limited and anthropocentric ones.
In this article I aim to demonstrate:
I’ll list out some modern and ancient models of moral and psychological development, and show their general tendency to expand the moral circle, and become more concerned with the abstract (not only beyond oneself or one’s tribe, but often beyond even one’s species).
I’ll unpack the “driving force” behind this tendency towards the cosmic – and I’ll outline why it’s crucial for us to encourage cosmic moral aspirations now, at the dawn of AGI.
We’ll take a look at various “models of development” from ancient to modern times, across philosophical and psychological lenses, and get a sense for the trend towards the cosmic that we often see as a commonality throughout them:
(Note: For each of the six modern models above, I created a much more expansive write-up [here], highlighting how these models tie into cosmic moral aspirations, including a robust set of quotes and graphics from the original authors. Because all of that detail was a bit much for this Cosmic Moral Aspirations article, I created an entirely separate article. If you have any other interesting models that help to explore these ideas, ping me on X.)
The ancient models of moral aspirations were anthropocentric in that they were imagined by people, and touted mostly to order the affairs of civilizations, or the minds of individual humans.
They are all completely soaked in the anachronisms of their time (as are your ideas and mind, dear reader). Confucius and Plato thought of foreigners as barbarous, and most Sufi mystics or Taoists didn’t really understand civilizational “progress” in the sense that we now see it in 2025 (technologically and culturally, their world moved vastly slower than ours, and Darwin’s insights wouldn’t arrive until centuries after them).
Of course this doesn’t mean they were advocates for posthumanism, or the path of blooming for AGI. The very notion would have been almost unimaginable to most of them.
The modern models are similarly anachronistic, and also have no direct reference to AGI or man-made posthuman intelligences (in fairness, how could they?).
Both old and new, these models ubiquitously point to ideals of development (philosophically, morally, psychologically) that are beyond human comprehension, or could only be explored by higher and more developed minds than those of humans.
What strikes me immediately about the examples above – modern and ancient – is that if we could speak with these thinkers now, at the potential dawn of posthuman minds, we would expect them to consider the vast expanses of knowledge, of experience, of power, of “the good” in a way that was also posthuman. In a way that could continue to expand up and outward, as their moral instincts already took them.
The good news is that some of the thinkers above aren’t dead, and we can speak to them. Peter Singer was on my Trajectory podcast, and as it turns out – he’s very much an advocate for blossoming life and power and moral aspirations beyond humanity, and even potentially beyond sentient wellbeing.
Even among thinkers who limit their frameworks to humanity (Maslow, Gebser, and Kohlberg were not imagining brain-augmented humans, AI superintelligences, etc.), there is a nearly universal trend for moral aims to reach beyond the individual, beyond the tribe, and even beyond humanity itself.
Commonalities we see:
To be clear, I don’t believe any of these moral, psychological, or religious systems to be completely “right,” but they show a clear, cross-cultural understanding of the components above. Cosmic moral aspirations are, as it turns out, remarkably common across time and space.
Certainly there is credence to the idea that it is in the thinker’s best interest to elevate abstract thinking and ideas. Many teachers or mothers will tell you theirs is “the hardest” or “most important” job in the world. Bonaparte extolled the role of great captains and rulers, and Plato naturally thought philosophers should be kings (convenient!).
But this ancient and modern tendency to see moral aspirations become more cosmic over time seems to be grounded in something significant.
People who have looked closely at intelligence or human nature have traced this line – and they’ve all speculated about where it might lead and where it might come from.
In this regard, even extremely modern ideas of cosmic moral aspirations, such as the worthy successor, are just an extension of this recurring theme of “expanding moral concerns” we see across so many traditions.
Now that we have technology to reach into the world of ideas (and soon, with robotics, the ability to reach into nature itself, with senses and cognition beyond our own), it is natural that we should want to see our fuzzy and more speculative moral ideals become more concrete through an elevation of intelligence and “mind” itself. AGI and brain-computer interfaces somewhat naturally fulfill this role.
If you could tell Plato that “the good” might be able to be plumbed to deeper depths by superintelligence, I suspect he’d be very interested to see such insights be brought to light.
If you could tell Avicenna that even more of nature’s grand order (the Active Intellect) could be made clear, and that entirely new reaches of that great order could be understood at greater granularity, I suspect he’d be rather interested in ensuring that such exploration could occur.
If you could tell Maslow that there might be minds capable of the “self-transcendence” he spoke of, but in an infinite and cosmically blooming way, I suspect he might want to see such a mind come into being.
Before exploring what to do about this trend towards the cosmic, it behooves us to ask where this trend comes from.
What are the forces that pull human beings from concern for the self, to concern for a level of order beyond one’s tribe and sometimes even beyond humanity itself?
Do they relate in any way to that general tendency that has moved life on earth from nematodes to homo sapiens?
Do they relate in any way to the kind of “local” and “global” concern that a cell in the human body must manage – both keeping itself alive and also serving its role in a greater system?
From a sociological and psychological standpoint, there are many viable reasons for this tendency of human societies to lean more cosmic over time, and for individual “wise” humans to see more cosmic values as more developed. Most of them are some version of the argument: “when people hold a wider set of values, they permit more cultural integration and more new ways of working/living/being, allowing for larger and more capable societies.”
(In August of 2025, ChatGPT gave me a pretty interesting breakdown of nine different reasons to justify this cosmic tilt – feel free to read that convo here.)
I’m going to move beyond these sociological attributions, though, and strike at what I suspect to be the deepest underlying force that opens humans up to cosmic moral aspirations.
To Spinoza, life had an inherent tendency to persist: he called it the conatus.
Persisting required not just defense (hiding, having a hard shell). Persistence required all kinds of powers and abilities, from new physical senses, to mental abilities, to flight, sharp claws, tool use, etc.
This “total set of powers that enables a thing to persist” he called potentia.
Potentia must expand in as many directions as it can. It seems to do this via random mutation, but via epigenetics and other means we don’t fully understand, evolution seems to “probe” adaptive directions through plastic responses, giving natural selection something pre-aligned with environmental pressures to work on (the works of Michael Levin and Stuart Kauffman explore these themes well).
Through whatever set of understood or not understood laws and mechanisms, the direction of these resulting processes seems to be “up and out.”
Blooming.
From worm to man this blooming has happened. From the state of nature to modern democracy. From bone clubs to iPhones and Xboxes.
The rush of activity from early flatworm to the Cambrian Explosion was an expansion of potentia running through life. A proliferation of new torches, all serving to extend and make greater the total power (and ability to survive) of the great flame of life.
The human desire to climb mountains, to understand more of nature, to try new ways of hunting, of making, of relating, of communicating, of using tools – is the expanse of potentia running through life.
Like the creatures before us, our tendencies and drives serve to open up new ways to persist, new sets of powers in service of the greater project of life.
This expansion of potentia isn’t a “quirk.” Even if parts of the process are random, it clearly leads reliably and consistently to higher levels of complexity and power. And this is because doing so is necessary for life itself to persist.
To paraphrase Turing Award winner Richard Sutton from his 2024 episode of The Trajectory podcast:
“Nature is a dynamic system… (and) beckons its creatures to find the best ways to become in the changing world.”
There isn’t a different way for life to behave. Change is the only constant.
This potentia-expansion as a response to the changing world – made manifest in the habits and tendencies of living things (including the human tendency to expand the moral sphere, and to imagine values and powers beyond the human) explains well enough our cross-cultural tendency towards cosmic moral aspirations.
There is a deeper part of us – ringing in Spinoza, in Lao Tse, in Heraclitus – that knows that change is the only constant.
Humanity’s expanding moral aspirations reflect the fact that we know we must take in and account for more powers, more coordination, more ways of being – that we must reach up beyond even the human. This tendency is deep, and it is the felt pull of that seething, magnificent process that hurled us from worm to man, and will (if we don’t screw things up) hurl us from man to something higher, and with more potentia still.
Man loves to forget this. Man actively pushes this truth out of his mind as much as he can. As Emerson says:
“This one fact the world hates: That the soul becomes.”
There is nothing unusual about considering the expanse of posthuman minds, the possible moral goods conceivable by posthuman beings.
There is nothing unusual in seeing the expansion of powers, value, and experience vastly beyond humanity as ascension, as something eventually necessary and certainly good.
At the dawn of posthuman minds, of brain-computer interfaces and AGI, we need to take seriously the inevitable changes. Heraclitus’s river is moving a thousand times faster today than it did in his own time, and now is not the time to put our head in the sand.
Between AGI, brain-computer interfaces, AI/VR immersion, and other pressing changes, the timeline for “humans-as-they-are” to hold the mantle of volition on earth seems fleeting, and we must seriously consider what is coming after us, or what we are transforming into.
Right now, all of the “get AGI right” dialogue (whether in the context of innovation or regulation) is done under the premise that the only “good” futures are good for one species.
Today we ask:
“How can we ensure a future where hominids are the locus of moral value and volition until the heat death of the universe?”
But this is both impossible and, in the long term, almost certainly immoral.
We should instead ask:
“What are the core qualities of this great flame of life of which we are part? And how can we ensure that it expands and carries on beyond us as we carried on beyond the sea snails?”
Yes, we can and should ask how humans can get a good shake in the process. I’ve done a good amount of thinking on this myself.
But our cosmic moral aspirations – the cross-cultural, cross-epoch tendency to grasp that there is something (powers, ideals, experiences, access to nature) that is beyond man, and to see this as higher – are here to remind us that it is the pursuit of that higher that matters most.
As we cast our eyes out to the long term, the continuing strength of the flame matters more than any one torch. And an attempt to ossify any one torch is almost certainly scorn for the flame itself.
Those who say that man’s highest aim should merely be the elevation of hominids-as-they-are are ignoring an embedded wisdom not only in our best psychological and moral theories and our finest religious and philosophical traditions; they’re also ignoring the greater necessary tendency of the great process of life that conjured us into being in the first place.
Cosmic moral aspirations are normal, and should now be normalized. The stakes are too high to ignore them.