Moral Singularity – Unpredictable Values Bode Poorly for Humanity

For the sake of this essay I won’t be talking about morality as some abstract human access to “the good,” but simply as a set of heuristics or principles for making decisions – for valuing what to do and what not to do.

Let’s start by defining our little made-up term:

Moral Singularity: A point in time where the values of transhuman or strong AI systems evolve so quickly and radically that it is impossible to predict what the most powerful systems will value – making it impossible to ensure the continued adherence to any previous goal.

It is my belief that a moral singularity will not result in a single, core set of values and principles being somehow “found” in the universe.

Rather, given what I believe to be the inherently arbitrary and contextual nature of morality (not that it should be ignored, but that it should be understood as malleable and open to reconsideration), it seems obvious that transhumanism and AGI will give birth to a splintered, ever-increasing moral complexity – one that will be both impossible to predict and nearly impossible for humanity to survive.

Why the Moral Singularity Seems Likely

My argument can be summarized as follows:

  • Beetle morality (if some basic beginnings of such an idea can exist in beetle minds, which is highly doubtful) would be more complex than that of algae (which we can presume aren’t conscious at all)
  • Rodent morality (which may exist on some rudimentary level) is more complex than that of beetles
  • Chimpanzee morality (which there seems to be ample evidence of) is more complex than that of rodents
  • Human morality (as splintered and disjointed as it is) is more complex than that of chimpanzees

As more advanced and varied mental hardware and software comes about, entirely new vistas of ideas and experiences become available to creatures possessing those new mental resources. I will posit that these ideas are often entirely inaccessible to their predecessors. In other words:

  • Rodents aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose chimpanzee moral variations
  • Chimpanzees aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose human moral variations

Very few people would disagree with the two statements above. It follows then, that:

  • Humans won’t be capable of imagining, in their wildest dreams, the near-infinite complexities of super-intelligent AI “morality”

In other words:

As AI augments itself, learns, grows, and develops, it will not arrive at a “singularity” in the form of a single set of inviolable moral ideas. Rather, it will explore the possibility-space of subjective moral ideas with such speed and voracity that its values may change wildly minute-by-minute – and somewhere in those moral oscillations, humanity is all but destined to be of little importance, and to be destroyed, absorbed, or ignored.


Arguments Against AGI Benevolence

It might be argued that people of different moral beliefs somehow coexist today, and that this is evidence that various future intelligences will coexist tomorrow, too.

I see a number of issues with the presumption of peaceful coexistence:

  1. Nature is Collaboration AND Competition: Humans have warred with each other for eons, and murder and suicide – though perhaps less common than 200 years ago – are still rife in the first world. Nature (including humanity) constantly oscillates between collaboration and competition. It seems ridiculous to suppose that an AGI would be an exception to that – or to presume we know that it would always be collaborative.
  2. Drastically Diverging Minds Are More Likely to Be in Conflict: Human beings have roughly the same mental “hardware” and “software”, and have somewhat similar needs and perspectives on much of the world. We are all Homo sapiens. And yet war and conflict are common. Consider:
    • Humans who are cognitively enhanced may be enhanced in different ways, which may result in drastically different ways of valuing the world and interpreting life – tearing them away from the current fabric of human norms and moral instincts, for better or for worse.
    • Artificial intelligence may begin with essentially none of humanity’s moral instincts, and will have entirely different cognitive structures. The way it “thinks” will be to our thinking what an airplane’s “flight” is to a bird’s flight.
  3. Change in the Number and Size of Minds Will Be Rapid and Chaotic: Change in the mental capabilities of animals has been relatively slow. The transition from some form of ape to Homo sapiens took perhaps 5 to 8 million years. Artificial intelligences (or enhanced transhumans) may change their mental hardware and software (i.e. the “stuff” that does the valuing and thinking, the basis of morality) drastically day-by-day, making their behavior almost completely unpredictable as their intelligence and understanding expand.

An explosion of variants of intelligence will more likely result in an explosion of variants in moralities – and the value of human life is unfortunately far from secure in a volatile era of expanding post-human intelligences.

What We Do About It

The best that we can do as humans is to hedge against the destructive force of this moral singularity. If AGI and cognitive enhancements become viable, this phenomenon is – I believe – likely to occur.

One possible option would be to begin with the best possible moral “framework” for AGI – which is a massively challenging problem in and of itself. The best we can hope for there is a starting point, after which the AGI will expand vastly beyond it.

Another option would be for humans to escape into a mind-uploaded personal virtual space (which I’ve referred to as the “epitome of freedom”), where they can explore the far reaches of mental variation and post-human conscious experience without the ability to physically harm one another.

In either case, I believe human beings need to accept that creating post-human intelligence will imply post-human ways of valuing things, and that this may imply very little value for humanity itself. There is no “AGI will definitely love and care for humanity” scenario.

This doesn’t mean we shouldn’t build AGI.

I would argue that building a worthy successor is the most (maybe the only) important thing that we can possibly do as a species.

We must accept, then, that handing that baton up to AGI will almost certainly imply not just the end of our reign, but the end of our continued existence as a species.

This implies that we should be careful to launch AGI in a way that would build such a worthy successor (see link above), because after we launch, it’s unlikely that a second chance will come. This would imply some kind of global governance, or at least shared vision, around AGI – which is the topic for another article (read: Unite or Fight – The International Governance of AGI).