Two Questions – All That Humanity Ultimately Should be Concerned With

The human race has two critical questions to answer in the 21st century:

1. What is a beneficial transition beyond humanity?

2. How do we get there from here without destroying ourselves?

These questions presuppose the following hypotheses:

  • Humanity will likely develop cognitive enhancement technologies, expanding our minds and altering our emotions, memories, personalities, and more. These technologies are likely to be widely adopted once developed (Nayef Al-Rodhan: inevitable transhumanism).
  • Humanity will likely develop artificial superintelligence within the 21st century (read the Emerj FutureScape Timeline: “When Will We Reach the Singularity?”).
  • Humanity, like all species before it, is on its way to (a) extinction, or (b) transforming into something beyond itself. There is no option to remain homo sapiens forever.
  • Humanity is selfish and amoral, and humans or human groups will act with violence or deception whenever it behooves them to do so (Nayef Al-Rodhan: emotional amoral egoism).

The grand challenge will be in uniting the world in determining and moving towards the most beneficial path beyond humanity. This I refer to as the Cause.

At present, almost nobody cares about these questions. Politics, the environment, and the economy are much more pressing near-term concerns – and indeed – we will need society to stay intact with reasonable peace and prosperity to move towards any other kind of progress.

Peace and prosperity, though – to what long-term end?

It is absurd to presume that human beings – as we are today – will be the dominant species in the known galaxy a million years from now. Many researchers believe it is absurd to presume that humans will remain the dominant species in the known galaxy even 60 years from now.

So what is this all for?

We wish for peace, for prosperity, for all of the United Nations’ Sustainable Development Goals (SDGs) – but why?

Some will argue that the goal of humanity – in the face of all history and evolution – will be the maintenance of homo sapiens as they are – for millions of years or indefinitely. This will especially be the case in the early days of the transhuman transition, when wide swaths of humanity will see AI and neurotechnologies as foreign and wrong.

Some will argue – as Lucretius did – that all things either die off or become something else; no species or entity can remain the same, and this change is unavoidable.

As more and more people – particularly young people – see these technologies as part of daily life, the sanctity of “humans as they are” will fade rapidly and give way to a new norm: That consciousness and experience are pliable (see my full essay on this topic: “Human Ideals Will Tear Us From Humanity”).

Others will argue – as I do – that change is inevitable, that neurotechnologies and AI will bring it about rapidly, and that humanity ought to guide and bend this change in directions likely to avoid Armageddon and to promote the wellbeing of sentient things (biological or nonbiological).

The conflict between these two groups may literally lead to war as control of AI and neurotechnology becomes the only politically relevant question on earth. I hope this doesn’t occur.

There are thousands of potential means of transition beyond humanity, including:

  • Wireheading and keeping the body alive as a mere biological cocoon for the brain
  • Freely allowing cognitive enhancement (to memory, emotion, creativity, etc.) for all human beings to become what they like (I have argued this scenario will lead to great conflict)
  • To build and merge with artificial superintelligence
  • To prevent artificial superintelligence from being built, but instead, upload all human minds into expansive and blissful virtual worlds (“Epitome of Freedom”)
  • To bow out nicely and live out the rest of our pleasant human lives as superintelligent and super-sentient machines take over earth and populate the galaxy
  • etc…

It behooves humanity to bring these questions to the national and global stage, and to think beyond the election cycles to ask ourselves:

1. What is a beneficial transition beyond humanity?

2. How do we get there from here without destroying ourselves?

If humanity (“team humans”) can imagine a trajectory for itself, we may cooperate to achieve it – and to prevent the natural arms-race dynamic of technological development, which, applied to AI and cognitive enhancement, would probably mean war.

Once all of the UN’s SDGs are fulfilled, we will still be generally anxious hominids, trapped by the hedonic treadmill and yearning for something else, something more. Sustained fulfillment and robust understanding of the universe aren’t possible in the human form – and should peace and prosperity reign, man would remain unhappy (the vessel is flawed).

This is not to say that SDGs shouldn’t be achieved, and that it wouldn’t be a net benefit to almost all life on earth if they were – it probably would be a net good. But to what end?

Sooner or later we will have to decide.

While some of these technological transitions might be decades away, I think it makes sense to explore our options sooner rather than later.


Header image credit: Statue of Socrates, National Academy of Athens