A Partial Inquiry on Fulfillment Beyond Humanity
The philosopher David Pearce has posited that the pain-pleasure axis of conscious experience may well be the “world’s inbuilt metric of value,” a kind of ultimate barometer of good or bad. While I agree with Pearce that positive qualia (positive conscious experience) and the absence of negative qualia are the best general moral outcomes we now know how to strive for, I cannot agree that this is the world’s “inbuilt metric” – merely that it is among the highest moral ideas that hominids can currently conceive.
I posit the following hypothesis:
There are vastly higher moral ideals – vastly higher forms of the “good” itself – than humans can imagine or access.
I’ll spare you the full list of reasons I believe this to be the case, but I’ll share one quote from my essay Moral Singularity, where I explore in much more depth why I believe morality will expand beyond any forms humans have now imagined:
As more advanced and varied mental hardware and software comes about, entirely new vistas of ideas and experiences become available to creatures possessing those new mental resources. I will posit that these ideas are often entirely inaccessible to their predecessors. In other words:
- Rodents aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose chimpanzee moral variations
- Chimpanzees aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose human moral thinking
Very few people would disagree with the two statements above. It follows, then, that:
- Humans won’t be capable of imagining, in their wildest dreams, the near-infinite complexities of super-intelligent AI “morality”
In this essay, my aim is rather to explore what those higher goods might be. As stated in the quote above, I’m nearly certain that there are a vast number of future moral ideas or ideals – or means of valuing and acting – that I won’t be capable of imagining (just as your pet hamster would have a hard time imagining nuclear non-proliferation).
That said, I believe there are strings I can pull on that might plausibly lead to these higher kinds of moral ideals – higher than anything we can now conceive, but plausible.
I cannot say with certainty “this is higher than utilitarianism”…
…but I can ask “If this is the case, isn’t it plausible, maybe even likely, that this might lead to a way of valuing and sorting the world that is more nuanced, useful, and true than utilitarianism?”
Here are three such jumping-off points for the plausibility of entirely higher and loftier kinds of “good”:
Again, I don’t consider any of these ideas to be “true,” but I consider them to be plausible gateways into possible “higher” moral ideas, ideas that entities beyond human beings (vastly augmented transhumans, or AGI) might access.
“Morality is not a set of freestanding abstract truths that we can somehow access with our limited human minds. Moral psychology is not something that occasionally intrudes into the abstract realm of moral philosophy. Moral philosophy is a manifestation of moral psychology. … Competing moral philosophies are not just points in an abstract space, but the predictable products of our dual-process brains.” – Joshua Greene, Moral Tribes
While Greene touts the usefulness and validity of utilitarianism in the book (in practical ways that I generally agree with), his frank statement above is one of many reasons I can’t agree with the David Pearce quote that I opened this essay with.
Namely: Our moral ideas are products of our hardware and software. This isn’t to say they aren’t valuable, valid, or maybe even relevant a thousand years from now (on some level).
If you believe that the monkey suit has given us access to the highest moral truths ever conceived… if you believe that while the evolution of technology and intelligence will go on and on, the evolution of morality ends with our eternally true hominid concepts… if you believe that our highest moral ideals today will hold true for vastly more intelligent AGI… then I’m afraid I have to disagree. I may even playfully call you a wise cricket.
Not only are human values constantly in conflict (there is no clear agreement, no clear conflict-ending ultimate moral insight, and plenty of wildly divergent ideas of what is eternally morally true – based in neuroscience, or in religion, or whatever else), but – to lend more credence to Greene – even if there were such agreement, it would point to a commonality in our hardware more than to a common and eternal moral truth somehow floating in the aether.
I have no certainties about morality in the future. In fact, the only thing I’m certain of is my uncertainty (read: Arguments Against Friendly AI). What I can say with near-certainty is that whatever we think is true now won’t hold true for beings more intelligent than ourselves, and that a blooming of new ways of acting and valuing (roughly, “morality”) will occur right alongside the blooming of new forms of intelligence – as it always has.
Roman Yampolskiy’s paper Unexplainability and Incomprehensibility of Artificial Intelligence opens with a number of apt quotes, including the following:
“Some things in life are too complicated to explain in any language. … Not just to explain to others but to explain to yourself. Force yourself to try to explain it and you create lies.” – Haruki Murakami
I have interviewed philosophers and future thinkers with PhDs – people who I respect greatly, and have learned much from – who nonetheless feel almost certain that the values they hold most dear will remain dear and true for future superintelligences. That their parental bonds, or their cultural traditions, or their love of walking in the forest, will be naturally understood by any higher intelligence, and will be enhanced and cherished – or at least respected. Nay, I say, nay.
These same folks can easily smile at the trite comforts of religious belief. Yet what complexities of thought they must develop to find those same comforts for themselves in morality and philosophy. If we want to take our future with superintelligent beings seriously, I think it behooves us – for safety’s sake if nothing else – to drop all pretense that our moral ideas have any more staying power than our physical form. If we claim to build AGI to “do good,” we should consider “exploring the good” to be a priority – possibly above any other moral idea.
Header image credit: GapUP.ca