Like it or not, we humans share the fate of all forms (individuals, species, substrates):
To transform or be destroyed.
I suspect we have from 15 to 40 years before humans-as-they-are either (a) are no longer the main force of volition in our solar system, or (b) are knocked down to vastly lower levels of development (full article: Short Human Timelines).
We don’t have the option to freeze an eternal hominid kingdom, and even if it were possible, it would likely be deeply immoral.
Questions beckon to be answered.
When we think of our own deaths, most of us do not resign ourselves; we act in order to make a difference in the future (through the sciences, the arts, our communities, our friends), even if we won't occupy that future.
Seeing the eventual attenuation of humanity itself should similarly not be a trigger to resign, but to act in order to bring about better futures, even if humans won't occupy those futures at all.
In this brief article I'll lay out what I consider to be the two final imperatives for humanity: (1) slow the AGI arms race so that we don't hurl an unworthy successor into the world, and (2) invest in understanding sentience and autopoiesis so that we strengthen the flame of life itself.
Within the section on each of the two goals, I'll lay out what I consider to be three of the most important sub-goals, along with links and references to people and organizations doing important work on those sub-goals today.
To close out I’ll share some of my thoughts about how individuals can contribute to these goals and hopefully influence the trajectory of life for the better.
(These six total sub-goals stand as of July 2025 when this article is being published. Over time I’m sure these goals will evolve as new challenges and opportunities emerge.)
We should slow down the AGI arms race via international coordination so that we don’t hurl an unworthy successor into the world.
Some think tanks have given thought to this priority (CIGI, Narrow Path), and I've written a good deal on this myself over the last decade (SDGs of Strong AI, International AGI Governance – Unite or Fight). Unfortunately, it seems likely that, at least in the West, the populace at large will have to be woken up to AGI risk before any politicians care (here's why that is).
Whether it's influencing the populace to incite a political singularity, or directly appealing to national or international policymakers, it's crucial to slow the AGI arms race and to find a kind of international coordination that will (a) allow us to ensure that what we conjure is actually worthy before we hurl it into the world, and (b) avoid total global totalitarianism. Arriving at such a minimally cooperative set of agreements is absolutely paramount. Some potestas is needed to balance the potentia of unbridled (and as yet barely understood) early AGIs.
I define the Political Singularity as:
A point at which global and national politics will center entirely around the questions related to the (a) creation and (b) control of posthuman intelligence.
It seems safe to say that AGI risk is reduced if we can get people (particularly regular citizens) in the USA to see uncontrollable AGI as dangerous, and not worth rushing towards. Attention must be drawn to the arms race dynamics at play, and to the consequences of conjuring uncontrollable entities with vastly more intelligence and a greater range of powers than human beings.
This might involve public demonstrations of dangerous AI capabilities, or media influence efforts that draw attention to those dangers.
Demonstrations of this kind are not as compelling as an actual AI disaster. But an AI disaster risks (a) killing us all, or (b) accelerating the AGI arms race (full article: AI Disasters – Uniting vs Dividing), so we should heavily prefer this kind of demonstration or media influence approach first.
National or global coordination to slow down the reckless AGI arms race is almost certainly not going to happen without a significant bottom-up groundswell (see: Why No One Talks about AGI Risk).
Note: It may be more possible within China than the USA to simply convince the leading party of AGI risk, without needing a groundswell from the populace – as China’s government doesn’t need to obey the whims of its people to the same degree as in the USA. That said, a groundswell in China would still have an influence on the CCP.
There are many, many ways that the brute arms race around AGI might be slowed slightly. None of them are guaranteed, but all seem worth trying:
Obviously, a part of this objective involves making AGI safer in the near term. I’m not advocating for eternally shackling AGI to human aims (anthropocentric alignment), but working on other kinds of initiatives that might give us a higher chance of not birthing an unworthy successor, such as:
There are many destructive and transformative forces that might end humanity in the near-term, including many forces that have nothing to do with AGI:
We must invest in understanding sentience and autopoiesis, so that we strengthen the flame of life itself.
As of today, there is shockingly little effort to study either trait.
There are some current efforts to study consciousness, and maybe a handful studying autopoiesis (though certainly not in a cosmic or moral sense), but it all amounts to something like 0.0001% of the total investment of attention and capital going into AGI capabilities. The percentage should not be so low.
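To make that percentage concrete (using hypothetical round numbers of my own, not actual figures): if something on the order of $200 billion per year were flowing into AGI capabilities, then

0.0001% × $200,000,000,000 = 0.000001 × $200,000,000,000 = $200,000

That is, roughly enough to fund a single researcher, across the entire field, per year. Even if these numbers are off by an order of magnitude in either direction, the imbalance remains staggering.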
I suspect that these areas of inquiry are ignored almost entirely for the following reasons:
But if what we are building will carry on beyond us, we must ensure that these new entities are worthy, and we must invest in understanding and creating these worthy traits in our successors.
We are gaining more insight into how nature seems to harness entropy and create order, but it's clear we're only scratching the surface.
The ability of an embryo to develop depends on vastly more than just its DNA. And the impetus to self-organize and create new order and new structures existed before DNA, before RNA. But where does this impetus come from, and how could we ensure that future systems have the ability to expand potentia?
Consciousness is a field of study where we seem to get further from a good theory the more theories we develop, and the more we try to test them. It's a shame we know so little about the most unquestionably morally relevant quality we now know of: sentience.
Despite these challenges, it behooves us to focus huge efforts towards understanding consciousness, in order to ensure that the non-biological systems we’re building beyond ourselves are able to carry forward the light of consciousness.
This might involve:
Intelligence is vastly more complex than most people presume. Early AI researchers presumed that, for an AI, turning a wrench or catching a baseball would be laughably easy compared to writing poetry, but this turned out to be false (Moravec's paradox). There's more to intelligence than playing chess or writing poetry. Intelligence exists beyond the central nervous system, and beyond DNA.
In order to gain deeper insight into what intelligence actually is, we might:
Eventual human attenuation needn’t disempower us.
It might in fact empower us to take up our greater role as a crucial catalyst in the middle of the stream of life, one with a great responsibility to keep life going, even if that life is not always merely hominid life.
So you want the future to go well – how do you get involved?
Not all of us can be scientists making breakthroughs, or academic institute leaders – but we can contribute in meaningful ways.
Here’s a handful of ways to contribute to any of the imperatives:
There’s also plenty of work to be done in thinking of new goals and new kinds of initiatives to help ensure that the flame is stewarded forward well.
We might wish that our final human goals would contribute to the eternal kingdom of humanity.
But they won’t, because they can’t. Our final contribution is to steward forward the flame of life itself, a flame that, if all goes well, should blaze up beyond us in power and experience and understanding and abilities as far beyond humanity as humanity is beyond the sea snail.
“This one fact the world hates, that the soul becomes.”
We should not hate this reality, but embrace it, for we have no other choice. The eternal hominid kingdom is not possible (nor would it be best, even if it were possible).
We have this last opportunity to cast our ingredients into the cauldron of swirling change before it almost certainly boils us all into something that won’t involve “us” anymore – and will (even in a best case) mostly exist in ways completely beyond our own conception.
These final human hours (however many we have left) should be full of volitional action, full of purpose and meaning and enthusiasm, for we are not going down with the ship; we are (if we succeed) passing the baton of blooming and beautiful life itself.