[If that title comes across as shocking or offensive, bear with me. Before you make assumptions about my opinions on post-human morality and moral stratification, please read the article.]

I was recently reading an article by Nayef Al-Rodhan called The Sustainable History Thesis – A Guide for Regulating Trans and Post-Humanism, and the essay brought up a number of topics I’ve been meaning to write about. Namely, the consequences of cognitive enhancement and moral worth, and how the post-human transition might be regulated.

I’ll use several of Al-Rodhan’s quotes from the article as jumping-off points to explore these subjects individually.

Sufficiently Enhancing Cognition Implies Enhancing Moral Worth

“Any hierarchical understanding of humanity entails the denial of certain kinds of recognition and the possibility of a future of post-humans imposes such hierarchy by its very nature.” – Nayef Al-Rodhan, The Sustainable History Thesis

Nayef suggests that enhanced people should not have more moral worth than unenhanced people.

I argue that people will be more morally valuable when they are cognitively enhanced. If you pick the traits that make human beings morally valuable, then multiply those traits tenfold in a person, that person is worth more (morally) than an unenhanced person.

If I ask you to list the traits that make you (a human, I presume) and your friends more morally valuable than a rottweiler, you might say:

  • “I have a greater sense of self and purpose than a rottweiler”
  • “I have a greater ability to relate to and love other living things”
  • “I am more useful in my ability to create and work in varied and productive ways”
  • etc…

There isn’t a single trait that could be listed, however, that at some hypothetical point in the future couldn’t be replicated (a hundredfold over) in a machine.

I have asked my audience to go through this same thought experiment in my first two TEDx presentations, titled Tinkering with Consciousness and What Will We Do When the Robots can Feel? – I’ve embedded that part of the latter talk below:

Astronomically enhanced human beings will – rightfully – be “worth” more than unenhanced human beings. This will be in an objective sense (i.e. they have more of the morally valuable “stuff” than regular people), and in a pragmatic sense (they will be the ones running society and the world).

Also – just as monkeys could never have imagined the types of moral qualities that evolved in humans and made them valuable, we cannot possibly imagine (and may not be capable of understanding) the valuable permutations of moral or mental qualities that post-humans will develop (read: AGI and Finding the Good). To suspect that current human values alone will persist in a post-human world is as ridiculous as suspecting that chimpanzee values would persist and regulate human society.

Once post-human minds – and post-human moralities – exist, it is extremely unlikely that human beings will remain “valuable” in the eyes of post-human entities for long (read: On Morality in a Transhuman Future).

Humans who are only slightly enhanced (e.g. slightly better memory, slightly better spatial awareness, or vastly clearer eyesight, etc) will likely be held to the same standard of moral worth as unenhanced humans, pragmatically. Once drastic and vast enhancement occurs – leaving humanity dramatically behind the new post-humans in cognitive and physical abilities of all kinds – our harmonious notion of equality is unlikely to hold up.

There is no reality where chimps are on the same moral level as humans (despite a genetic difference from humans of only 3-5%), and the same will be true of unenhanced humans relative to extremely cognitively enhanced post-humans.

Imagine an enhanced person who:

  • Can duplicate his/her mental processes to work on 20-30 problems or thought patterns at one time
  • Can connect to the internet to instantly pull down any kind of information, with a perfect memory
  • Has an emotional range that is 10X greater than humanity’s (with wide-sweeping variations of pains and pleasures that humans can’t imagine)
  • Has the ability to communicate instantaneously with any technology device, without needing a keyboard or other manual interface
  • Has the ability to create scientific breakthroughs that are as far beyond human science as a human Mars mission would be from the “scientific” understanding of chimpanzees
  • Is capable of empathizing with any or all living things, taking into account the wellbeing of all living things in its actions and plans
  • Has a variety of rich and morally worthy qualities that human beings cannot imagine (just as chimpanzees cannot imagine human capacities to relate, create art, conduct scientific experiments, etc)
  • Has a 1000-year lifespan

It is wholly unrealistic to suspect that such an entity would be treated equally to human beings. Not only would such an entity probably deserve a higher moral merit, but such an entity would likely come to control society relatively quickly, and would impose its moral structures onto humanity just as humanity has imposed its moral structures on other animals (and other humans).

Let it be known that I’m not writing about this idea because I’m excited about it, or because I enjoy thinking about it – but because I believe it poses a significant threat to the wellbeing of humanity, and that we are (for the most part) woefully overlooking the reality that moral stratification will necessarily follow from cognitive enhancement.

My first TEDx, recorded in late 2014, was about this exact topic – the need to grapple with creating entities with not just post-human capabilities, but post-human worth. I believe that we cannot plan for or insist on eternal equality with drastically post-human entities, and that we should instead decide what to do as a species.

Plans might include:

  • Holding back on the creation of any cognitive enhancements, and enforcing this ban globally
  • Ensuring that cognitively enhanced humans operate in digital worlds and ecosystems where they cannot impose on unenhanced humans
  • Ensuring a kind of universal and gradual cognitive upgrading across humanity broadly
  • Carefully calibrating cognitive enhancements so that powerful post-humans will inherently value unenhanced humans as much as they value themselves
  • Etc…

It’s certainly not a topic I can solve in this article – but it’s a question that demands an answer. Namely:

Sufficient cognitive enhancements will create entities with not just post-human intelligence, but post-human moral worth.

How should we deal with this fact?

How should we plan the transition ahead given the challenge of moral stratification?

These questions deserve serious consideration. The aspiration for eternal equality with drastically enhanced super-humans is, in my opinion, wholly unrealistic, and risks inviting a massive moral clash in the next 20-30 years as cognitive enhancement becomes widespread.

Uncertainty in Post-Human Governance

“Dignity is a fundamental human need, insufficiently appreciated in political theory and, indeed, even more central to human nature than the quest for liberty or political freedom. The search for dignity has pushed humanity forward and examples from around the world demonstrate this.

This also explains the crises battering even mature democracies in the West, where people do enjoy political freedom and Constitutional rights but where many live in poverty, alienation, exclusion and hopelessness. Importantly, what I mean by dignity is much more than the mere absence of humiliation. It is a more comprehensive and holistic set of needs that include: reason, security, human rights, accountability, transparency, justice, opportunity, innovation, and inclusiveness.” – Nayef Al-Rodhan, The Sustainable History Thesis

While it can be debated whether dignity should be a cornerstone of governance, as Nayef suggests, it does seem obvious that it serves an important role.

It is by no means obvious that post-human society would rely on the concept of dignity as some of us do today. As mentioned, a post-human morality may or may not have anything in common with current human morality, and may be as varied from current human morality as current human morality is from that of chimpanzees (and with vastly more permutations than human moral theory has today).

“Ultimately, nobody can be left behind if humanity is to succeed.” – Nayef Al-Rodhan, The Sustainable History Thesis

This is a lovely sentiment, and one that I share. It reminds me of the quote at the end of Emerson’s essay on Napoleon:

“As long as our civilization is essentially one of property, of fences, of exclusiveness, it will be mocked by delusions. Our riches will leave us sick; there will be bitterness in our laughter, and our wine will burn our mouth. Only that good profits which we can taste with all doors open, and which serves all men.” – Emerson, Representative Men, Napoleon, or, the Man of the World

That said, I have no idea if it is true.

It seems safe to say that homo sapiens succeeded not by friendly mutual cooperation with other hominids, but by conflict. By no means am I advocating conflict, and I have been writing for years about how humanity might avoid a total arms race dynamic in the path to transhumanism.

Rather, I am stating that our hope for a “tide to lift all boats” is a noble aspiration, but we should expect it to be an almost insurmountable challenge – one that will require a massive, concerted international effort beyond anything we’ve achieved as a species in the past.

I suspect that we should brace ourselves with the understanding that creating something vastly beyond ourselves in all morally valuable traits would also mean creating something more morally valuable than ourselves – and that we should tread carefully in the post-human transition, as we will be triggering not just the launch of the grand trajectory of intelligence, but also our own fall from the moral pedestal.


Note: Nayef Al-Rodhan is among the few living intellectuals that I openly admire. His Twitter handle is a good place to see his range of essays, and I frequently recommend his book Sustainable History and the Dignity of Man. Many of his ideas about the international power dynamics of AI and neurotech are a decade or more ahead of their time – but I suspect we have reason to engage with them sooner rather than later.

Header image credit: IEEE