Higher Good – AI and the Pursuit of Moral Value Above Utilitarianism

The philosopher David Pearce has posited that the pain-pleasure axis of conscious experience may well be the “world’s inbuilt metric of value,” a kind of ultimate barometer of good or bad. While I agree with Pearce that positive qualia (positive conscious experience) and the absence of negative qualia are the best general moral outcomes we now know how to strive for, I cannot agree that this is the world’s “inbuilt metric” – merely that it is among the highest moral ideas that hominids can now conceive.

I posit the following hypothesis:

There are vastly higher moral ideals – vastly higher forms of the “good” itself – than humans can imagine or access.

I’ll spare you all of the reasons I believe this to be the case, but I’ll share one quote from my essay Moral Singularity, where I explore in much more depth why I believe morality will expand beyond any forms humans have now imagined:

As more advanced and varied mental hardware and software comes about, entirely new vistas of ideas and experiences become available to creatures possessing those new mental resources. I will posit that these ideas are often entirely inaccessible to their predecessors. In other words:

  • Rodents aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose chimpanzee moral variations
  • Chimpanzees aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose human moral thinking

Very few people would disagree with the two statements above. It follows, then, that:

  • Humans won’t be capable of imagining, in their wildest dreams, the near-infinite complexities of super-intelligent AI “morality”

In this essay, my aim is instead to explore what those higher goods might be. As stated in the quote above, I’m nearly certain that there are a vast number of future moral ideas or ideals – or means of valuing and acting – that I won’t be capable of imagining (just as your pet hamster would have a hard time imagining nuclear non-proliferation).

That said, I believe there are strings I can pull on that might plausibly lead to these higher kinds of moral ideals – higher than anything we can now conceive, but plausible.

I cannot say with certainty “this is higher than utilitarianism”…

…but I can ask “If this is the case, isn’t it plausible, maybe even likely, that this might lead to a way of valuing and sorting the world that is more nuanced, useful, and true than utilitarianism?”

Here are three such jumping-off points for the plausibility of entirely higher and loftier kinds of “good”:

  • Emergent consciousness. There might exist emergent super-conscious things. Humans are made up of many little cells and bacteria. It’s possible that societies are conscious at some meta-level that we don’t understand – or that the earth or the universe is conscious in some Spinozian way. Shouldn’t we understand consciousness and its myriad manifestations before we start optimizing for earth-life, or human-life, or something else? Isn’t it plausible that optimizing for those meta-conscious entities will be better than optimizing for ourselves… and isn’t it plausible that they would exist on a plane beyond the fettered little pain-pleasure axis that we humans can imagine? If everything in the universe – from a supernova to a granite pebble – is in some way (more or less) conscious, isn’t it plausible that optimizing for their “wellbeing” would involve entirely new kinds of understanding beyond what we have about animal happiness?
  • Alternative universes. Let’s presume that heat death will certainly end all life in our universe eventually. There might be alternative universes where heat death won’t occur… and our actions may only matter insomuch as we escape into that eternal world. All suffering in this limited world won’t matter (in ultimate classical utilitarian terms) when compared to the happiness that could exist in such an eternal world – might that not alter our priorities for acting in this world? Let’s say that those alternative universes contain myriad new kinds of entities, super-intelligent and super-conscious in ways we can’t imagine. Is it possible that our actions now would be best directed toward the aims of these great beings, rather than toward our own wellbeing?
  • Consciousness as one arbitrary checkpoint in life’s development. Once there were just proteins. Then cells. Then multicellular organisms. Then organisms with sensory organs. Then muscles. Then consciousness (though when consciousness arose is less certain than the other steps). So now this morally worthy “stuff” (consciousness) has emerged. Isn’t it plausible to suppose that there is another blooming of “stuff” beyond consciousness? At a higher level of complexity or brainpower, or in other brain substrates, we should expect that new vistas of morally relevant “stuff” will bloom forth – can we really argue that such a thing won’t happen? Isn’t it plausible that such a thing is in fact likely, given enough time and development? Optimizing for the utilitarian good would then be wasteful, as it would prevent us from optimizing for higher kinds of good.

Again, I don’t consider any of these ideas to be “true,” but I consider them to be plausible gateways into possible “higher” moral ideas, ideas that entities beyond human beings (vastly augmented transhumans, or AGI) might access.

Joshua Greene’s book Moral Tribes – recommended to me by Wendell Wallach – contains a paragraph that will serve us well here:

“Morality is not a set of freestanding abstract truths that we can somehow access with our limited human minds. Moral psychology is not something that occasionally intrudes into the abstract realm of moral philosophy. Moral philosophy is a manifestation of moral psychology. … Competing moral philosophies are not just points in an abstract space, but the predictable products of our dual-process brains.”

While Greene touts the usefulness and validity of utilitarianism in the book (in practical ways that I generally agree with), his frank statement above is one of many reasons I can’t agree with the David Pearce quote that I opened this essay with.

Namely: Our moral ideas are products of our hardware and software. This isn’t to say they aren’t valuable, valid, or maybe even relevant a thousand years from now (on some level).

If you believe that the monkey suit has given us access to the highest moral truths ever conceived … if you believe that while the evolution of technology and intelligence will go on and on, the evolution of morality ends with our eternally true hominid concepts… if you believe that our highest moral ideals today will hold true to vastly more intelligent AGI… then I’m afraid I have to disagree. I may even playfully call you a wise cricket.

Not only are human values constantly in conflict (there is no clear agreement, no clear conflict-ending ultimate moral insight, and plenty of wildly divergent ideas of what is eternally morally true – based in neuroscience, or in religion, or whatever else), but – to lend more credence to Greene – even if there were such agreement, it would point to a commonality in our hardware more than to a common and eternal moral truth somehow floating in the aether.

I have no certainties about morality in the future. In fact, the only thing I’m certain of is my uncertainty (read: Arguments Against Friendly AI). What I can say with near-certainty is that whatever we think is true now won’t hold true to beings more intelligent than ourselves, and that a blooming of new ways of acting and valuing (roughly “morality”) will occur right alongside the blooming of new forms of intelligence – as it always has.

Roman Yampolskiy’s paper Unexplainability and Incomprehensibility of Artificial Intelligence opens with a number of apt quotes, including the following:

“Some things in life are too complicated to explain in any language. … Not just to explain to others but to explain to yourself. Force yourself to try to explain it and you create lies.” – Haruki Murakami

I have interviewed philosophers and future thinkers with PhDs – people who I respect greatly, and have learned much from – who nonetheless feel almost certain that the values they hold most dear will remain dear and true for future superintelligences. That their parental bonds, or their cultural traditions, or their love of walking in the forest, will be naturally understood by any higher intelligence, and will be enhanced and cherished – at least respected. Nay, I say, nay.

These same folks can easily smile on the trite comforts of religious belief. What complexities of thought they must develop to find those same comforts for themselves in morality and philosophy. If we want to take our future with superintelligent beings seriously, I think it behooves us – for safety’s sake if nothing else – to drop all pretense that our moral ideas have any more staying power than our physical form. If we want to pretend to build AGI to “do good,” we should consider “exploring the good” to be a priority – possibly above any other moral idea.


Header image credit: GapUP.ca