Handing Up the Baton – The Best of the Four Human Futures

Ask a group of 7-year-olds “What do you want to be when you grow up?” and you might get a variety of answers that fall into the following categories:

  • Impossible: 
    • “I want to be Superman!”
  • Possible: 
    • “I want to be a fireman like my dad!”
    • “I want to be an astronaut!”
  • Detrimental:
    • “I want to play video games and hang out with my friends!”

Given the rapid and drastic changes humanity is currently undergoing and is about to undergo – from the increasing digitization of our experience to the advancement of artificial intelligence – it behooves us to think seriously about where we want to be as a species in a handful of decades.

Many leading politicians, AI researchers, and AGI lab leaders are open to the idea of some degree of global governance around AGI.

But what destination do we want to arrive at in 20 or 200 years?

Ask that question to adult humans today and you’ll get just as many impossible and detrimental answers as you would with 7-year-olds.

Given how powerful these technologies are, we should look frankly at the impending changes that AGI and other emerging technologies imply, and pick among the futures that are viable, instead of putting our heads in the sand and wishing for comforting but impossible, or outright detrimental, futures.

Unrealistic Desired Futures

Here are two of the most commonly expressed impossible or detrimental outcomes for humanity:

1. Eternal Hominid Kingdom. Humans, or slightly upgraded transhumans, rule a handful of planets and space colonies, possibly maintaining some kind of global alliance.

2. Gradual Biological Evolution from Homo Sapiens. Humans continue evolving biologically without any technological integrations. Biology continues to do what biology does. For millions of years tech remains merely a tool.

I’d argue that both of the above scenarios are unrealistic and ridiculous because:

  • The motives of both pleasure and power will mandate ravenous BCI / AI adoption: People will adopt whatever technology helps them reliably achieve their goals or fulfill their reward circuits, even if it seems monstrous or godlike to previous generations. Our own human drives will compel us toward a virtual-first human condition and a merger with technology, toward brain augmentation (for pleasure or power), and toward increasing reliance on, and leverage of, ever stronger AI tools (Read: Your Dystopia is Myopia).
  • The “real” and “biological” aren’t sacred: Today’s first-world 4-year-olds, many of whom have lived their lives 6 inches from an iPad since birth, do not consider the biological or the “real” to have any special sanctity. They’ll gladly upgrade their minds, have AI friendships and romantic partners, exist in primarily virtual spaces, and do other things that seem as monstrous to us today (I write this in May 2024) as our current lives would seem to our grandparents back in 1960 (imagine explaining to grandma that her grandchild would one day spend 14 hours a day on screens). These shifting norms around virtual relationships (and virtual life), and the erosion of the “sacredness” of the non-virtual “real,” will, I suspect, lead quickly to posthuman levels of tech merger.
  • Too many technologies advancing at once: Let’s just say, for some strange reason, that large language models and current BCI approaches are total dead ends. No more progress, no additional capability. Even if that were so, there are so many kinds of AI innovation (hardware, software, etc.), and so many developments relevant to neurotech and nanotech, that it would be impossible to stop them all from rolling forward. Increases in intelligence and capability are coming from more directions than ever.
  • All things attenuate or turn into something else: There is no “pause” button on evolution, or on the natural world. All species that come into existence either attenuate or turn into something else. Lucretius was right. Given a long enough time horizon this is certainly the fate of humanity as well. We needn’t rush to this outcome in a foolhardy way, but we should also be careful not to impact the trajectory of life in a way that would impede the future blooming of potentia.

There may be strong arguments as to why humanity should hold off on building AGI in the immediate future, and there are probably many reasons why we should hold off on reckless brain-computer interface adoption without more careful testing. But “hold off on these developments forever” isn’t on the table.

There is too much benefit, too many balls rolling forward already, and too many reasons why human nature will leverage these technologies.

The futures that are viable imply changes that are unfortunately uncomfortable for most people to grapple with. Any sufficiently long-term positive future eventually implies the attenuation of humanity. Maybe it takes 1000 years. Maybe it takes 100. Maybe 10. 

But the “eternal hominid kingdom” future is outlandishly unlikely, and probably not morally best.

In the terminology of the 5 Stages of Grief, we must move from Denial to Acceptance. The majority of unrealistic end-games spring from Denial and Bargaining:

Posthuman Acceptance - Stages of Posthuman Grief

The Four Viable End Games for Humanity

In the long run, there are only four futures we can realistically expect to arrive at – and only the last of them could be argued to be beneficial to life.

Two options to attenuate:

1. Extinction from non-AGI-related causes

  • Super-volcano
  • Asteroid
  • Nuclear war (unrelated to the AGI race)
  • A global pandemic (man-made or natural)
  • The death of our sun

2. Extinction from AGI or AGI-related causes

  • AGI itself kills us all (through malice, neglect, etc).
  • Humans (probably the USA and China) go to nuclear war over the AGI race.
  • Strong AI permits bad actors to create and use new, powerful weapons (biological / otherwise)

Two options to transform:

3. Unworthy successor

  • AGI takes over, kills most earth-life, but sputters out. Net destructive.
  • AGI expands without the light of consciousness.
  • AGI expands, but optimizes for something arbitrary / limited.

4. Worthy successor

  • AGI expands into the galaxy, carrying the torch of life and consciousness, blooming into ever higher capabilities (potentia) and goals

I suspect that only the Worthy Successor outcome is a beneficial outcome – and that it is a moral imperative for us to eventually reach such a successor (here’s the full argument as to why I believe this).

Unfortunately, a Worthy Successor is not guaranteed to respect or take care of humanity – and many smart thinkers suspect that it’s very unlikely we’ll be able to control AGI, either now or in the future.

Policy Considerations 

We can’t “freeze time” and stop the forces of change (climate change, technological development, demographic collapse).

Year by year (or at the very slowest, generation by generation), our experience will become more and more virtual, and our brain-augmented humans and our strong AI will become more and more powerful. 

We also can’t ensure that artificial general intelligence will maintain human survival or wellbeing – or even consider us at all in its plans and aims.

I therefore posit the following concluding recommendations for policy discussions regarding the future of AGI and brain-computer interfaces:

1. Accept Inevitable Posthumanism: We should accept that an entity with the capability for higher goals and actions than humanity cannot be guaranteed to value humanity – and we should also accept that an eventual posthuman trajectory is inevitable.

Policy discussions that assume an eternal hominid kingdom (a world where AI is merely a tool, forever) are fantasy, and ignore the inevitable forces that will change human nature, and the nature of the posthuman intelligences that are bubbling up through humanity and coming into being now.

This doesn’t mean we should rush to posthumanism, or that there shouldn’t be guidelines put in place. There should be. But this will require point 2 below.

2. Coordinate Internationally Around AGI Trajectories: The current state of affairs (as of the time of this writing, May 2024) is one of no international AGI governance at all. 

Because so many massive waves of technological change are plowing forward at the same time, there is an outright AGI arms race between the major AGI labs (Meta, OpenAI, etc.) and the great powers (the USA and China). This dangerous condition of “if I don’t build it, they will, so I must build it even though I know it’s dangerous” exists because there are no international rules for what futures humanity should or should not move towards regarding posthuman intelligence (I argue that such coordination is the only alternative to an arms race).

So if you want AGI progress slowed – either because you want to chase the “eternal hominid kingdom” dream, or simply because you don’t want a military AI arms race armageddon in the coming decade – you should be advocating for serious global governance now.

What such international coordination might look like is something I lay out briefly in my longer Emerj article, The SDGs of Strong AI.

3. Explore Options for Preserving / Extending Human Consciousnesses: Because posthuman intelligences are all but destined to rule the future, and because we cannot ensure that such intelligences will treat humanity well, we should consider ways to extend the survival and ensure the wellbeing of current human consciousnesses.

It may be viable for humanity to “merge” with technology via brain-computer interface as part of the path to a worthy successor, in order to:

  • (a) extend our capabilities as a species, to keep up with increasingly intelligent machines,
  • (b) help kindle the intelligence explosion that might result in a Worthy Successor, and
  • (c) attempt to become part of the great posthuman intelligence itself (possibly keeping our consciousnesses alive in some way as posthuman intelligence expands into the galaxy).

It may also be possible to upload human minds (Kurzweil’s Ship of Theseus approach is a useful thought experiment here) into non-biological substrates to simulate a trillion years of expansive blissful experience in what might only be an actual earth-hour (see Sugar Cubes).

This way, even if AGI decides to do something more useful with the compute resources housing those human consciousnesses, a single hour would suffice to give each human consciousness more blissful experience than it could ever have in a billion lifetimes (not a bad way to bow out). This kind of uploading might also help to train the AGI and help it develop into a more capable worthy successor.
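
To make the scale of that claim concrete, here is a rough back-of-the-envelope sketch (my own illustration; the specific numbers are assumptions, not figures from this essay or from Sugar Cubes) of how much faster than real time such a simulation would have to run to pack a trillion subjective years into a single earth-hour:

```python
# Back-of-the-envelope illustration (an editorial sketch, not from the essay):
# how much faster than real time would a simulation need to run to deliver
# a trillion subjective years of experience within one earth-hour?

HOURS_PER_YEAR = 365.25 * 24      # ~8,766 hours in a year
subjective_years = 1e12           # "a trillion years" of experience
wall_clock_hours = 1.0            # "an actual earth-hour"

subjective_hours = subjective_years * HOURS_PER_YEAR
speedup = subjective_hours / wall_clock_hours

print(f"Required speedup over real time: {speedup:.2e}x")
# -> roughly 8.8e+15x, i.e. about ten quadrillion times faster than real time
```

In other words, the scenario implicitly assumes subjective experience can run roughly ten quadrillion times faster than wall-clock time.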

Closing Notes on Risk and Opportunity

In order to think critically about the long-term direction we want humanity to take – and consider the best and most viable paths for innovation and regulation – we must begin by letting go of childish or impossible ideas.

We could see this time as tragic, as the J-curve of technological change will almost certainly end the reign of unaugmented Homo sapiens as the most powerful species on the planet.

There are great risks before us – but also greater opportunities than life has ever known. Bargaining and Depression don’t get us far, but Acceptance does.

The earth now is burning much brighter with the torch of life than the universe was 10B years ago. There are more species, and more creatures with larger brains and greater sentient ranges, capable of more ideas, goals, achievements, and mastery of nature.

The goal is to walk this path well enough to ensure that 10B years from now, the universe itself is still lit, and lit more brightly, with the torch of consciousness, life, and potentia.

The hope is that intelligence beyond us can discover what this universe is all about – and can continue to live and create and discover in ways that are as unimaginable to us as our activities are to earthworms.

The baton has been passed up to us.

And while we cannot grip the baton forever – we can pass it up again.

This time – maybe – deliberately.