A Partial Inquiry on Fulfillment Beyond Humanity
As human beings we are propelled by our varied hopes; our lives are in many ways driven by them.
On the day-to-day, we hope to make the sale, to avoid traffic, to feel better after a good night’s sleep.
In the context of our lifetime, we hope to finish our book, to sell the company, to retire happily, to avoid divorce and maintain happiness with a spouse, to move somewhere warm and beautiful, to achieve some modicum of renown in our field.
In the context of humanity, we hope that great powers will avoid nuclear war, that we will handle the environmental crisis, that the quality of life for all people will improve.
But what are our hopes beyond this? What can we hope for in the more distant future?
The United Nations’ Sustainable Development Goals are a nice start for reaching humanity’s near-term and mid-term aims. But for what reason? Let’s say, in 40 years, far fewer people live in poverty, depression is decreasing, human rights are more broadly respected around the world, and peace is maintained between great powers.
So what?
As a species, it’s important for us to consider this “so what”, and to think about the most worthy distant aims that we could hope to achieve through the progress of technology and political order.
In this article I’ll argue the following:
We’ll start with ourselves:
The most that we can hope for as an individual “self” would be to persist, happily. Life extension would be a start, but it is by no means adequate in the long term. 200-year-old humans with our moral failings, our limited memory and ability to learn, and our infinite varieties of neuroses and flavors of suffering simply wouldn’t be a reasonable long-term hope.
Viable options for improving the state of the individual consciousness and ensuring its survival include:
“Heaven” is the hope of so many great religions, and for good reason: life is hard, and the wish to be free from limitation and from all suffering is universal.
The highest hope for the self is essentially the expansion of blissful experience, of knowledge, of expression – in an unending universe all one’s own. This scenario is described in an essay titled The Epitome of Freedom.
It should also be noted that even in a mind-uploaded “heaven” state, the entity that manages the substrate housing an individual human consciousness is unlikely to spend energy maintaining those individual sentiences forever. Sooner or later this “heaven” will end, when the entity decides to use the substrate for other purposes, or simply stops devoting energy to its maintenance. This scenario is described in an essay titled Digitized and Digested.
While those who go into this state of bliss will be doing so in order to escape the state of nature, they will in fact still be in the state of nature, and the entities or beings that control the hardware into which humanity is uploaded will wield untold power over all of them (read: Substrate Monopoly).
I have argued (in Alignment Isn’t a Problem if We Don’t Coexist) that future post-human intelligences and world simulations are unlikely to be compatible with each other. Divergent intelligence types would be unlikely to get along, and probably should not be permitted to interfere with or in any way harm one another. Interacting may not even be necessary, as vastly richer modes of enjoyment and learning will emerge beyond human relationships as we now know them.
It should be noted that the merger scenario seems unlikely to preserve us, as individual sentience seems unlikely to remain distinct in such a scenario; a “turning off” of the self (read: literal death) would be the likely experience of the individual. If this were the case, few people would wish for such a merger. Some interesting Twitter debates spun up on this “merger” topic when I posted about it; check them out in the post below:
"At a certain point humans will merge into the AGI, and we'll live forever for sure."
Brother, what makes you believe that your sentience has any value to a god-level AGI?
What in you could possibly be worth salvaging?
Do you wish to salvage/merge with the minds of ants?
— Daniel ‘No, Brother’ Faggella (@danfaggella) December 24, 2023
The “so what” question is much more pressing when extended from individual lives to life itself.
The preservation of present earth species, or the satiety of humanity, cannot be the greatest, distant aim. On a long enough time horizon, hoping for security on earth isn’t even possible: not only will species inevitably die off and new ones develop, but eventually earth will no longer be habitable.
The highest hope that humanity can set for life itself is the expansion of intelligence and sentience into new, greater, higher, more expansive and survivable forms.
In my essay Worthy Successor, I define the term this way:
Worthy Successor: A posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government, and determine the future path of life itself.
Kurzweil writes about “the universe waking up” in his book The Singularity is Near, and it seems that indeed this is about as lofty a long-term aim as we could hope for: To come together and influence the grand trajectory of life itself.
With superintelligent post-human intelligence, life might:
When I speak with AI researchers, policymakers, or futurists about preferable long-term visions for humanity, I rarely hear anything post-human mentioned. It is not uncommon to hear:
Given a long enough time horizon, however, none of these anthropomorphic visions seem relevant.
The moral consequence (in utilitarian terms) of nominally happier humans living longer on earth is, in the grand scheme, negligible. And there are probably higher goods than utilitarianism. To a gigantic AGI mind, utilitarianism may just be fries on the pier.
Lucretius was right: nothing is permitted to stay the same. Humanity, human values, and even earth itself are destined to be destroyed and to turn into something else. This does not mean that we should welcome destruction, or rush off to move beyond humanity.
It does, however, mean that when we think about our grand questions of “why”, we must consider what the best-case scenarios in the long-term might be, particularly if we want to develop technology and governance structures that might lead us in that direction (read: The SDGs of Strong AI).
It seems to me that our highest hope as individuals is our perpetual existence in expansive bliss, and our highest hope for life is to steward forward intelligence/potentia itself – bringing about a successor truly worthy of carrying the torch of life into the cosmos.
Header image credit: Engadget