The “moral singularity” refers to how morality – a particular system of values and principles of conduct – will evolve radically once (a) human intelligence and cognition are enhanced, and (b) artificial general intelligence (AGI) enters the world.
It is my belief that a moral singularity will not result in a single, core set of values and principles being somehow “found” in the universe. Rather, given what I believe to be the inherent arbitrariness and contextual nature of morality (not that it should be ignored, but that it should be understood as malleable and open to reconsideration), it seems obvious that transhumanism and AGI will give birth to a kind of splintered, ever-increasing moral complexity that will be both impossible to predict and nearly impossible for humanity to endure.
My argument can be summarized as follows:
- Beetle morality (if some basic beginnings of such an idea can exist in beetle minds, which is highly doubtful) would be more complex than that of algae (which we can presume aren’t conscious at all)
- Rodent morality (which may exist on some rudimentary level) is more complex than that of beetles
- Chimpanzee morality (which there seems to be ample evidence of) is more complex than that of rodents
- Human morality (as splintered and disjointed as it is) is more complex than that of chimpanzees
As more advanced and varied mental hardware and software comes about, entirely new vistas of ideas and experiences become available to creatures possessing those new mental resources. I will posit that these ideas are often entirely inaccessible to their predecessors. In other words:
- Rodents aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose chimpanzee moral variations
- Chimpanzees aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose human moral variations
Very few people would disagree with the two statements above. It follows, then, that:
- Humans won’t be capable of imagining, in their wildest dreams, the near-infinite complexities of super-intelligent AI “morality”
As AI augments itself, learns, grows, and develops, it will not arrive at a “singularity” in the form of a single set of inviolable moral ideas. Rather, it will explore the possibility-space of subjective moral ideas with such speed and ferocity that it may change wildly minute by minute, and somewhere in those moral oscillations humanity is near-destined to be of little importance: destroyed, absorbed, or ignored.
It might be argued that people of different moral beliefs somehow coexist today, and that this should be evidence that various future intelligences will coexist tomorrow, too. I see a number of issues with this presumption:
- First, humans have warred with each other for eons, and murder and suicide – though maybe less common than 200 years ago – are still rife in the first world.
- Second, human beings have roughly the same mental “hardware” and “software”, and have somewhat similar needs and perspectives on much of the world. We are all homo sapiens. And yet war and conflict are common. Consider:
- Humans who are cognitively enhanced may be enhanced in different ways, which may result in drastically varying ways of valuing the world and interpreting life, tearing them away from the current fabric of human norms and moral instincts – for better or for worse.
- Artificial intelligence may begin with essentially none of humanity’s moral instincts, and will have entirely different cognitive structures. The way it “thinks” will be to our thinking as the way an airplane “flies” is to a bird’s flight.
- Third, change has been relatively slow when it comes to the mental capabilities of animals. The transition from some form of ape to homo sapiens took maybe 5 to 8 million years. Artificial intelligence (or enhanced transhumans) may change their mental hardware and software (i.e. the “stuff” that does the valuing and thinking, the basis of morality) drastically day by day, making their behavior almost completely unpredictable as their intelligence and understanding expand.
The best that we can do as humans is to hedge against the destructive force of this moral singularity. If AGI and cognitive enhancements become viable, this phenomenon is – I believe – likely to occur.
One possible option would be to begin with the best possible moral “framework” for AGI – which is a massively challenging problem in and of itself. The best we can hope for there is a starting point, after which the AGI will expand vastly beyond it.
Another option would be for humans to escape into a mind-uploaded personal virtual space (which I’ve referred to as the “epitome of freedom”), where they can explore the far reaches of mental variation and post-human conscious experience without the ability to physically harm one another.
In either case, I believe human beings need to accept that creating post-human intelligence will imply post-human ways of valuing things, and that this may imply very little value for humanity itself. There is no “AGI will definitely love and care for humanity” scenario. I believe we should also accept that if we create post-human intelligence, we are handing off the baton of leadership for future intelligence. While we shouldn’t rush this hand-off, we should eventually embrace it.
In summary: An explosion of variants of intelligence will more likely result in an explosion of variants in moralities – and the value of human life is unfortunately far from secure in a volatile era of expanding post-human intelligences.