A Partial Inquiry on Fulfillment Beyond Humanity
As humans, learning often feels good, food often tastes good, novelty brings joy to life, living by values that we set brings order to our consciousness, and besides very few…
The universe doesn’t seem to be inherently imbued with meaning, but anything we could do to find such meaning, to make our time here “worthwhile”, would seem to fall into one of two broad categories:
- “Doing good”: acting on our best current understanding of what is worth doing
- “Finding the good”: discovering what is actually worth doing in the first place
I argue that the creation of artificial general intelligence should also be pursued in service of these two goals.
“Finding the good” seems to fall again into two broad categories of its own.
While “doing good” is the usual focus of artificial intelligence, it seems clear that “finding the good” is the higher long-term objective. I’m sure there’s much more beyond both concepts, but as a human being, pursuing the question of what the good is seems more abstract, more challenging, and ultimately more important than executing on our feeble and limited notions of the good itself. I’ve written previously about the idea of a moral singularity, and I’ll repeat one of my main ideas here:
As more advanced and varied mental hardware and software comes about, entirely new vistas of ideas and experiences become available to creatures possessing those new mental resources. I will posit that these ideas are often entirely inaccessible to their predecessors. In other words:
- Rodents aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose chimpanzee moral variations
- Chimpanzees aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that compose human moral variations
Very few people would disagree with the two statements above. It follows, then, that:
- Humans won’t be capable of imagining, in their wildest dreams, the near-infinite complexities of super-intelligent AI “morality”
To presume that the skull of Homo sapiens houses the highest conceivable moral ends would be immeasurably presumptuous. Morality has evolved with intelligence, and if we believe that our goals are higher, more informed, and “better” than those of rodents or chimpanzees, then we should also presume that the goals, moral principles, and ethical decision-making of future intelligences will be vastly beyond our own, and perhaps should be.
It is a dangerous idea, because it doesn’t presume human preeminence, and I argue (in the moral singularity article linked above) that drastic changes in moral principles will make for an uncertain place for human beings as they are.
The long-term aim of artificial intelligence isn’t merely to pursue the moral aims of humanity; given a long enough time horizon, the goal is to explore goodness itself, and how morally worthy aims, in all their complexity, could be pursued and achieved. I would argue that “doing good”, even in the utilitarian sense, is still just the best Homo sapiens-level “grasp” of goodness, and will eventually be overcome. Take two thought experiments:
- Imagine chimpanzees attempting to determine the moral rules and aims by which human society should be run
- Imagine humans attempting to determine the moral rules and aims by which superintelligent AI should be run
We can understand the incompatibility in the first example because we know what human and chimp morality are like. We cannot imagine what superintelligent AI morality is like, yet somehow we often have the gall to presume that our moral intuition (as disjointed and incoherent as it often is) is the guiding compass for the future trajectory of all sentience, not just humanity. A silly presumption.
By no means am I advocating an eagerness to obliterate humanity for the sake of producing artificial general intelligence, nor an eagerness to forgo human moral intuition and hand over the reins of discerning the good to machines as soon as possible.
Rather, I’m making it clear that in the long-term, should we produce entities with vastly more cognitive capacity, creativity, and wisdom than ourselves, we should presume that they would be able to help us with our greatest task: Namely, determining what the hell we should be doing in the cold, mute universe (read: Ethics).
I explored both “AI for doing the good” and the danger of “AI for finding the good” in my TEDx presentation for Cal Poly.
Header image credit: Singularity Hub