On Morality in a Posthuman Future, and Its Repercussions for Humanity

Will intelligent machines have morals? Will the world and its sentient technologies agree on any kind of ethical “code”? Will democracy still reign when intelligent technologies run essentially all aspects of life?

Questions involving morality, ethics, and values in the future seem to me both immeasurably important and inevitably senseless. On the one hand, our transition to transhumanism and our development of technologies that alter or create conscious experience (or drastically and irrevocably change life as we know it) represent as close to an “ultimate ethical precipice” as humanity could ever conceive. The more thinking, experiencing consciousness there is out there, the more ethics there is at stake. Volcanoes weren’t as tragic when all they killed was bacteria. In fact, few of the bacteria that survived probably even noticed – as bacteria aren’t known for noticing much.

Pompeii was tragic because it buried so many sentient beings. With billions of such beings, more ethical gravity is at stake, and if we end up occupying the solar system or the known universe with an unimaginably capable and sentient super-intelligence, the stakes go up again by untold factors.

On the other hand, the possibilities of what “morality” may mean or imply in the distant future cannot possibly be known. I will argue that holding fast to a set of moral beliefs may itself be as dangerous as rapid experimentation and iteration on ethical frameworks – though both may seem viable as the initial stages of super-intelligent AI take hold.

Below, I’ll aim to explore this dynamic of the usefulness and uselessness of morality in a hypothetical (or not-so-hypothetical) future dominated by intelligence beyond our own – with the goal of contributing to the conversation around finding a “sweet spot” for pragmatic application today.

Why Post-Human Morality May Mean the End of Humanity

I present a series of suppositions:

  • Chimpanzees (3-5% genetically different from humans) could not possibly understand complex human moral ideas, such as:
    • Law and punishment (from the Code of Hammurabi to US tax law)
    • Marxism
    • Kant’s categorical imperative
  • Human beings could not possibly imagine the moral discernments and modes of valuing and action taken by a post-human intelligence (something as much smarter than humans as humans are smarter than chimpanzees)

Another series of suppositions:

  • If an artificial intelligence is evolving and growing in intelligence, then it is likely also expanding and altering its moral positions – its modes of valuing things
  • In all likelihood, many of those “modes of valuing” that such an evolving system would go through would not involve valuing humanity, or would involve a firm decision to end human life
  • Even if such an evolving intelligence continuously switched (in its growing intelligence and understanding) between different ways of treating humanity, with enough of such transitions, humans would likely be exterminated during one of the periods where the machine didn’t value humans, or believed their eradication to be best

If these suppositions hold, then the very development of post-human intelligence is almost certain to eventually lead to the destruction of humanity – or at least to human irrelevance.
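To make the third supposition concrete, consider a toy model (my own illustration, with its assumptions labeled as such): suppose each successive “moral regime” the evolving intelligence passes through carries some small, independent probability $p$ of being hostile or indifferent to human survival. The probability that humanity survives $n$ such transitions is then

$$P(\text{survival}) = (1 - p)^n \;\longrightarrow\; 0 \quad \text{as } n \to \infty$$

Even at $p = 0.01$, surviving a thousand transitions has probability $(0.99)^{1000} \approx 0.00004$. The model assumes independence and a constant, non-vanishing $p$ – both simplifications – but it shows why “enough transitions” does the argumentative work in the supposition above.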

(Feel free to agree or disagree on Twitter – this topic has become quite a hot-button issue as I’ve been talking about it.)

Importance of Considering Post-Human Morality

Where I see tremendous importance in moral thought (both in its evolution and in its common acceptance) is in regard to our immediate technological development. Given that developments in nanotechnology, genetics, brain-machine interfaces, and other fields carry the tremendous implications they do, it seems that some kind of unified ethical principles might aid the world’s beneficial “progress” in a direction we can agree upon as “good” (admittedly easier said than done).

In some respects, the safe and beneficial development of these technologies seems no different from problem-solving in any other domain. The greater the awareness of and unity around the issues, and the greater the allocation of resources toward their resolution, the better the result would seem to be.

As for present attempts at guiding action in the transition to transhumanism, I admire Dr. Max More’s “Principles of Extropy.” Chief among the ideas that resonate with me are: A) he vigilantly aims to find a grounding for our development without imposing on or limiting it, and B) he opposes dogma to the point of inviting challenges and alternative viewpoints in search of better guiding principles (which, of course, is the only genuine opposition to dogma, anyway).

I have come up with no better or clearer answer for our orientation toward the future than open-mindedness, unified effort (ideally, it seems, this “ethical precipice” would unify or harmonize people and nations), and a vigilant pursuit of what is best. Even in its ideal form, this does not seem to exclude the strong possibility of serious conflict around these issues (e.g. the potential inevitability of Hugo de Garis’ “Artilect War”).

However, without some unity around policies, around which changes and developments should come first, and even around new restrictions and laws governing the development and use of these technologies, it seems evident that the technologies of tomorrow have a distinct possibility of getting out of control. Of course, this is also possible even with collaboration, policy, enforcement, and the like, but I will assume it to be slightly less so. Our best odds for constructing a successful “framework” of ethical development or morality seem to involve a collaboration of experts from all realms of knowledge, including politics, science, psychology, and philosophy.

Of course, this “vigilant collaboration” only makes things more complicated – especially because it will presumably be individual human minds aiming to distill this wealth of knowledge and meaningfully implement it in the world. This, amongst other reasons, is why I have posited elsewhere that it may be best for our first super-intelligence to be constructed for the sake of aiding in the guidance of our developments – a kind of master of “phronesis” (practical wisdom), of applied morality (see my AGI essay on “Finding the Good”). Then again, a bunch of hominids being the boss of a near-deity-level moral discerning engine might defeat the point of said engine. Tough call either way.

Possibilities for Ethics in the Future

The future of “ethics” is uncertain in my mind, though I see three distinct possibilities (among others, I am sure).

First, the morality of the future may be the continuation of the morality of a people, nation, or organization here on Earth. Though agreement is likely never to be unanimous, enough of Earth’s inhabitants (particularly those in control of the technology) might come to agree upon common tenets, and if and when these tenets are imbued into a super-intelligence, they may become the morality or value system carried forward indefinitely.

Second, the future may be a world devoid of morality. Either the super-intelligence(s) of the future will have no innate morality, or their goals and activities will be void of any such notions. There may be a thousand vying human value systems and a super-intelligence that cares for none of them, and simply pursues a goal or heads toward an aim. The machine’s system of “values,” then, may never attain the depth of humanity’s moral conceptions, or it may simply choose not to allocate its attention to pursuing morality, but rather some other aim (e.g. expanding into and exploring the galaxy, or protecting and bettering the life of man).

It is the third option, however, that I deem the most likely future of what we know today as “morality.” This third option implies an immeasurably complex and ever-changing system of priorities, beliefs, values, and understandings that humanity is incapable of grasping. Just as our thoughts on values and ethics fly high above the head of a rabbit (which cannot understand them, and has no notion of what “understanding” is), we should not assume that we would be able to grasp any notion of the “morality” of a super-intelligence.

It is safer to assume that such an intelligence would evolve and develop its system of morality and values just as individual people, nations, and eras in the history of man have – but in a machine this process would occur much more rapidly, and to much further extents, than in man. Instead of ideas being passed down in stories or texts, a super-intelligence would be capable of conceiving of all previous moral thought in an instant, and in the next instant extrapolating its meaning and repercussions with a rational capacity far beyond that of present-day humanity.

It would seem that even if such an AI were programmed with a given moral framework, this, too, could change. If its hyper-vigilance in discerning new knowledge and coming to deeper understanding were applied to its priorities and purposes – in addition to fields of study like medicine or finance – then it may very well “change its mind” and break from the prescriptions we originally endowed it with.

Where an Ever-Shifting Moral Super-Intelligence Leaves Man

This, in my view, does not bode well for humanity’s survival. We would certainly like to program an AI with a human-friendly set of “values,” but its re-assessments and vigilant pursuit of its aims and its notion of the good would likely not take long to bring its free thoughts to “moral” ideals that no longer involve the nuisance of humanity. It would seem that these infinitely powerful and consuming “oscillations of thought” might at some point yield the thought that humanity ought to be either ignored or destroyed.

The arguably most disturbing notion is that these new and further moral conceptions – like the new and further scientific ones – will almost certainly be closer to “correct” than any given human notion. By combining all previous scientific knowledge, new and deep understandings and results will be drawn in every field – and countless new “fields” will emerge when the universe is placed under the “lens” of a sentience trillions of times more intelligent than ourselves.

If this same kind of collaboration, rationality, and all-encompassing discernment is applied to morality, it would seem difficult for us to argue that a super-intelligence’s conclusions are “wrong” – indeed, it seems inevitable that they will be “less wrong” than any limited notion our cortexes might conceive (ultimately, our disagreement would likely not do much for us, anyway).

I certainly don’t want to be exterminated, but it seems difficult to say that I “know” it is right that I be preserved. Might we, too, have an end like that of Socrates? Given the cup of hemlock to drink, it is said that he drank cheerfully, rather than act against the society he chose to live in. If we’ve built machines to think, reason, and do more than we ever could – and we’ve given them permission to be in a position to destroy us – a Socratic fate doesn’t seem like so much of a stretch.

Our best “hope,” it seems, is to intend for (and ensure) an optimally benevolent super-intelligence to begin with – but where we are taken from there, we cannot possibly know. It may behoove us to accept our place as a tangent in the spire of form.