As Much as Humanity Can Hope for in the Long Term

As human beings we are propelled by our varied hopes; our lives are in many ways driven by them.

On the day-to-day, we hope to make the sale, to avoid traffic, to feel better after a good night’s sleep.

In the context of our lifetime, we hope to finish our book, sell the company, retire happily, avoid divorce and maintain happiness with a spouse, move somewhere warm and beautiful, and achieve some modicum of renown in our field.

In the context of humanity, we hope that great powers will avoid nuclear war, that we will handle the environmental crisis, that the quality of life for all people will improve.

But what are our hopes beyond this? What can we hope for in the more distant future?

The United Nations’ Sustainable Development Goals are a nice start for reaching humanity’s near-term and mid-term aims. But to what end? Let’s say that, in 40 years, far fewer people live in poverty, depression is decreasing, human rights are more broadly respected around the world, and peace is maintained between great powers.

So what?

As a species, it’s important for us to consider this “so what”, and to think about the most worthy distant aims that we could hope to achieve through the progress of technology and political order.

In this article I’ll argue the following:

  • For humanity, our highest hope is to upload our individual consciousness into another substrate to experience millions of years of simulated and interesting bliss.
  • For life itself, our highest hope is to build a worthy successor AGI – millions of times more capable, powerful, aware, and intelligent than we are – which can populate the galaxy and keep the torch of life alive.

We’ll start with ourselves:

1 – Humanity

The most that we can hope for as an individual “self” would be to persist, happily. Life extension would be a start, but by no means adequate in the long term. 200-year-old humans with our moral failings, our limited memory and ability to learn, our infinite varieties of neuroses and flavors of suffering – simply wouldn’t be a reasonable long-term hope.

Viable options for improving the state of the individual consciousness and ensuring its survival include:

  • Simulated Heaven: A mind-uploaded, blissful existence in a substrate that can hypothetically sustain such an experience. This “happiness” would look nothing like the happiness we now experience.
  • Merger: Combining a human consciousness (probably one that is already uploaded) into a greater AI superintelligence.

“Heaven” is the hope of so many great religions, and for good reason – life is hard, and the wish to be free from limitation and from all suffering is universal.

The highest hope for the self is essentially the expansion of blissful experience, of knowledge, of expression – in an unending universe all one’s own. This scenario is described in an essay titled The Epitome of Freedom.

It should also be noted that even in a mind-uploaded “heaven” state, the entity that manages the substrate housing an individual human consciousness is unlikely to spend energy maintaining those individual sentiences forever. Sooner or later this “heaven” will end, when the entity decides to use the substrate for other purposes, or simply refrains from devoting energy to its maintenance. This scenario is described in an essay titled Digitized and Digested.

While those who enter this state of bliss will do so in order to escape the state of nature, they will in fact still be in the state of nature, and the entities or beings that control the hardware into which humanity is uploaded will wield untold power over all of those uploaded minds (read: Substrate Monopoly).

I have argued (in Alignment Isn’t a Problem if We Don’t Coexist) that future post-human intelligences and world simulations are unlikely to be compatible with one another – divergent intelligence types would be unlikely to get along, and probably should not be permitted to interfere with or in any way harm one another. Interaction may not even be necessary, as vastly richer modes of enjoyment and learning will emerge beyond human relationships as we now know them.

It should be noted that the merger scenario seems unlikely to preserve the self – individual sentience seems unlikely to remain distinct in such a scenario, and so a “turning off” of the self (read: literal death) would be the likely experience of the individual. If this were the case, few people would wish for such a merger. Some interesting Twitter debates spun up on this “merger” topic when I posted about it.

2 – Life Itself

The “so what” question is much more pressing when extended from individual lives to life itself.

The preservation of present earth species, or the satiety of humanity, cannot be the greatest, most distant aim. On a long enough time horizon, hoping for security on earth isn’t viable: not only will species inevitably die off and others develop, but eventually earth will no longer be habitable.

The highest hope that humanity can set for life itself is the expansion of intelligence and sentience into new, greater, higher, more expansive and survivable forms.

In my essay Worthy Successor, I define the term this way:

Worthy Successor: A posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government, and determine the future path of life itself.

Kurzweil writes about “the universe waking up” in his book The Singularity Is Near, and indeed this seems to be about as lofty a long-term aim as we could hope for: to come together and influence the grand trajectory of life itself.

With superintelligent post-human intelligence, life might:

  • Explore and expand new and tremendous kinds of pleasure, bliss, and positive qualia – creating radiant, powerful, diverse, and expanding new forms of utilitronium
  • Understand more of nature in order to discover how to travel faster than light, how to productively use resources on varied planets, etc.
  • Explore the nature of value and “goodness” – beyond limiting, hominid-conceived notions of utilitarianism and expanding into vastly more beneficial modes of valuing that humanity couldn’t possibly conceive of (read: Finding the Good with AGI)
  • Escape into other universes in order to avoid the heat death of the current universe (Dr. Kaku is one of the few thinkers who addresses this point as part of the post-human trajectory)
  • (And much, much more that humans can’t possibly imagine, just as early rodents couldn’t possibly imagine the hopes, goals, and capabilities of humanity.)

In other words, of all the future intelligence scenarios from Max Tegmark’s Life 3.0 book, only “Descendants” is actually viable on a long-term time horizon. We can orchestrate – at best – a short and tentative period where AGI will “serve” human aims before intelligence expands and explores.

Conclusion

When I speak with AI researchers, policymakers, or futurists about preferable long-term visions for humanity, I rarely hear anything post-human mentioned. Instead, it is not uncommon to hear:

  • A powerful artificial general intelligence that maintains different nations and cultures, sustaining the rich cultural heritage of different groups, and ensuring peace between them.
  • Brain-computer interfaces that augment human beings with greater empathy, so that we can all treat one another and animals with more compassion as a default.
  • Life extension technologies that allow people to enjoy health, vigor, and productive life for hundreds of years.

Given a long enough time horizon, however, none of these anthropocentric visions seem relevant.

The moral consequence (in utilitarian terms) of nominally happier humans living longer on earth is – in the grand scheme – negligible. And there are probably higher goods than utilitarianism. To a gigantic AGI mind, utilitarianism may just be fries on the pier.

Lucretius was right: nothing is permitted to stay the same – and humanity, human values, and even earth itself are destined to be destroyed and turn into something else. This does not mean that we should welcome destruction, or rush to move beyond humanity.

It does, however, mean that when we think about our grand questions of “why”, we must consider what the best-case scenarios in the long-term might be, particularly if we want to develop technology and governance structures that might lead us in that direction (read: The SDGs of Strong AI).

It seems to me that our highest hope as individuals is our perpetual existence in expansive bliss, and our highest hope for life is to steward forward intelligence/potentia itself – bringing about a successor truly worthy of carrying the torch of life into the cosmos.


Header image credit: Engadget