Many months ago, Peter Voss (who we’ve interviewed previously on the Emerj podcast) came through San Francisco and we sat down for a coffee at my favorite place in Hayes Valley, Arlequin Cafe. We chatted a bit about business first. Peter runs Aigo.ai, and his work on natural language and AI is a lot of fun to discuss. Similarly, we chatted a bit about Emerj, our SEO strategies, our trials and challenges with hiring for key roles, etc.
Quickly, however, we got down to a topic that Peter has thought about for much longer than I have: AGI. It was nearly 5 years ago when I first interviewed Peter about strategies to develop artificial general intelligence, along with Ben Goertzel and Pei Wang.
Peter has written a good deal about AGI and ethics, and we ended our last chat in the heat of disagreeing about whether more intelligent machines would inherently be more morally “good”. I promised Peter that I’d put together my own thoughts on the matter and keep the conversation going for our next chat – and this short article is an attempt to do just that.
The quotes that I draw from here are copied directly from Peter’s own short article called Improved Intelligence yields Improved Morality, which you can find here: https://medium.com/@petervoss/improved-intelligence-yields-improved-morality-775950db696f
Yes, I know: Intelligent people can be really nasty.
What I want to explore here is why better intelligence is likely to foster more moral behavior.
Let me turn this around, and ask ‘what factors tend to promote undesirable/ bad/ wrong/ immoral actions?’
Let’s start with these four:
Fear
Lack of knowledge
Poor thinking/ reasoning
Emotions that try to protect our egos
I desperately want to believe this initial premise.
While there seems to be some truth to this, it seems terribly idealistic. Why… if only we had more knowledge and a better ability to reason, we’d act as benevolent and “good” beings.
First, this seems to presume a firm notion of what “the good” is. Certainly, even among human beings, these ideas vary widely. As atheistic or agnostic fancy-smarty-types who study AI, we might have reached an intellectual level where we feel that – surely – we’ve arrived at a firm notion of the good which is ideal. I’m not so sure about this.
Nick Bostrom posits that if you take a spider and enhance its intelligence to a generally human level, its values and perceptions and morality would not automatically match those of us smarty-AI-studying types, and indeed it would have as much grounds for its notion of “good” as we would for ours. Even this presumes that “we” as humans have come up with a shared and clearly “right” moral conception, which I wholeheartedly believe to be impossible. While there may be some reasonable “possibility space” of human notions about the “good” (based on our hardware, our software, and the values we tend to hold as Homo sapiens in a society), any sort of certainty here is woefully unobtainable.
Fear, one of our basic defense mechanisms, can make us lash out or act hastily. American reaction to 9/11 would be one good example. Reaction to the recent US presidential election, another. Or, more generally, when parents’ fear for their children promotes the devastating War on Drugs, or excessive juvenile child-molestation laws and sentencing — destroying numerous lives quite unnecessarily.
While I’m not familiar with excessive child-molestation laws or their consequences, I get the overall point here, and I’ll take your word for it.
It seems safe to say that animal fear – where unwarranted – is often an enemy of reason. That isn’t to say that it’s inherently “wrong” in some objective moral sense.
Nayef al Rodhan (among others, I’m sure) argues that emotion is an important factor in our intelligence, and that even fear often involves a kind of congealed subconscious wisdom, not merely an irrational animal feeling. A tingling on the back of one’s neck might not just be an irrational animal feeling hurled forth to make us “bad” or “irrational”; it might be a kind of intuition built up from so many similar situations that one can’t even articulate why, yet one’s “gut” is sending a strong signal that might be valuable.
We might argue that in some ideal future superintelligence, there won’t be intelligence and behavior “shortcuts” in the form of mysterious “feels”, but rather an accountable kind of decision tree of reasoning. Fair enough: without the hardware and software constraints of the human skull, that should be at least feasible. That being said, it doesn’t seem 100% clear that “fear = morally bad behavior”, or indeed even that “bullying” fits in the inherently “bad” category in all situations (for instance: situations that call for strong and immediate leadership, or situations of self-defense).
Humans are inherently not very good at gathering information before acting. We tend to rely too much on previously-formed beliefs and to shun conflicting information. Similarly, we are more comfortable among like-minded people and thus risk an echo-chamber effect. Acting on mistaken or incomplete information often leads to doing others wrong, and/or hurting ourselves. The above examples would also apply here.
While limited rationality is closely related to the knowledge issue, it has a somewhat different dynamic.
Nobody could disagree with your first two sentences in the paragraph above – I’m on board with you 100% there, and hopefully AI can help out here.
What isn’t certain is whether a completely non-human superintelligence – with completely non-human goals and values – would arrive at the same ethical conclusions as humanity. Even that last sentence implies that there is some kind of “ideally sagacious human perspective”, which I don’t think exists.
The Mayans were certain that the sun was God.
Some people today are certain that the earth is flat.
Some Christians, or Muslims, are certain that God himself sent their respective prophets to speak the deepest truths to man.
For millennia, humans were probably convinced that disease and illness were the result of mystical spiritual forces.
To think that we – petty 21st-century humanity – have some kind of eternal grasp on “ultimate good” seems absurd. Do we not think that the Romans or Egyptians or Tang-era Chinese thought the same? Is it really now that finally we’ve “arrived” at this divine idea?
If anything seems to me to be the hallmark of post-enlightenment Western society, it’s the idea that everything can be questioned, and that our previously iron-clad ideas (about the nature of light, for instance, or Newtonian physics) are in fact able to be shattered entirely, or splintered into far more complex ideas. It’s safer to assume that this is the case, methinks, than the idea that we’ve arrived at moral goodness.
I consider the likelihood of 21st-century humans landing on the Rosetta Stone of moral “goodness” to be just about as likely as chimpanzees doing so. In other words: Nil.
Good reasoning skills require practice — a long-term development commitment. One needs to systematically reduce bad thinking habits, and learn to better avoid logical fallacies. Again, shortcomings here are more likely to make us do things that we will regret — be it entering into harmful relationships or impulsively lying or deceiving.
Certainly, poor reasoning has been the origin of much stupidity and confusion, but there is absolutely no reason to believe that Napoleon or Mao or Robespierre weren’t good at “reasoning”. I’m not necessarily calling any of these men evil, but it seems safe to say that they weren’t saints, either. I suspect that it’s possible to be excellent at “reasoning”, yet still act in a kind of direct or ambitious self-interest.
The idea that through better reasoning we will become more altruistic is somewhat ridiculous. Caesar was almost certainly better at “reasoning” than anyone reading this article now – but those capabilities might not have enhanced his moral standards very much at all.
In addition – which conception of “the good” would become clearer if we were better at “reasoning”?
Would we have a better idea of Aristotle’s golden mean?
Would we have a better idea of how to enact Kant’s categorical imperative?
Would we have a better idea of how to perform utilitarian calculus, so as to maximize sentient wellbeing with our actions?
“Reasoning” can be applied to many disparate moral theories, because “the good” is remarkably arbitrary and subjective. See the example of Bostrom’s spider mentioned earlier. We can’t expect said spider to magically act more “morally” if it is better able to reason.
In addition – I’d be willing to bet that in the literature and academic circles of “reasoning”, there are competing theories and experts, each “camp” or each “luminary” calling out the fallacies and idiocy of the other. Like “goodness”, we may have little to grasp onto objectively here.
Lastly, there are our ‘reptile brain’ impulses related to protecting our egos — emotions that provide short-term (false) comfort to our self-image. These include denial, lying, boasting, anger, and bullying. Obviously, actions or decisions made under their influence have a much higher risk of being ‘bad’.
We’re coming in with some suppositions here:
a) Lying is bad, and is for the wicked
I’m not going to justify lying or tout it as a virtue – but I can’t discount the fact that an AGI may well deceive whoever it wants in order to achieve its goals. To think that “super intelligent” means “always super truthful” seems positively ridiculous. Maybe I’m reading this wrong.
b) Lying is driven by the “reptile brain”
From what I gather, deception does seem to show itself in lower animals, even crustaceans. However, it can be argued that deliberate, volitional deception (as opposed to some biologically pre-programmed activity, like cowbirds laying eggs in other birds’ nests) is for creatures with significant cortex, like what we might see in dolphins or chimpanzees.
The same that’s been said here about lying might also be said of boasting and of anger and of “bullying”. Mao’s civil war for control of the Chinese government shouldn’t necessarily be seen as morally “right”, but by golly, it sure worked out given his objectives – as terrible as it is to look back upon. Who knows what kinds of goals a future machine might have. “Bullying” may well be the best tool in the toolbox, and a machine a million times more intelligent than humans may well have the right to bully its moral agenda, as we bully our moral agenda on the ants and salamanders and rodents that stand in the way of our clearly-more-important human goals.
Cultural beliefs about “ego” have changed over time. In ancient Macedon and Greece and Rome, ego wasn’t always viewed as a vice (see Renault’s The Nature of Alexander). It takes some gumption and presumption about oneself in order to obtain lofty objectives and hold a difficult line of action. I’m merely making a hypothetical point to find a break in black-and-white thinking; I’m by no means justifying immoral action today or in the past (though, in part, I’m pointing out how subjective “immoral” is). It’s difficult to think that our general human consensus about “ego” (in this arbitrary slice of time) would hold true for an intelligence astronomically beyond our own.
It seems near certain that the concept of “ego” would have a drastically different meaning (and depth) to a superintelligent AI than it would to a modern human. We could hope – maybe even presume – that some of the bluster of humanity’s brand of “ego” wouldn’t exist in machines. It is still reasonable to suspect that machines would have concern for their own survival – and there seems to be a chance that – like humans – they’ll be solipsistic beings who count their own experience and consciousness as primary in their decision-making.
Enter AGI.
AGI, or human-level general AI, will by design be much less susceptible to these four factors than we are. All other things being equal, an AGI will be much less likely to engage in harmful or immoral behavior.
Furthermore, and just as important, AGI companions or advisors can become like the fictional angel on our shoulder, guiding us gently towards better decisions: ‘Hey Peter, perhaps you should count to one hundred first, gather more facts, think about it a bit more, and give your emotions a chance to cool down’. Additionally, we’d expect our super intelligence ‘angels’ to proactively give advice and help us think things through.
Advanced AI has the potential to make us better people, and to help create a better world. Yes, there are uncertainties and risks attached to having AGI, but a good case can also be made that we actually need AGI to help guide us through this stage of human evolution, to help save us from ourselves.
I would not disagree that AGI has the potential to make us better people, and to help create a better world. Hypothetically, with the right programming (I’ll leave that part to you, Peter!), we would see less irrational and clearly detrimental behavior from future AGI. My fingers are crossed that this will be the case, and that – at least for a little while – AGI might provide a useful service to humanity.
I would also suspect that serving us smelly little hominids would not be the ultimate and eternal goal of an intelligence vastly beyond our own.
I would also suspect that – just as crickets or chimpanzees cannot possibly conceive of our moral ideals and values – we will not possibly be able to conceive of the moral dictates and conclusions of AGI. At some point, it would be like explaining a Montaigne essay to a gerbil.
I would also suspect that, just as spider morality and cricket morality and beaver morality and squid morality vary widely, AGI morality will expand and develop and change and alter as the machine learns and expands its capacities.
I’ve written about the “moral singularity” before; this is the gist of the idea:
As more advanced and varied mental hardware and software comes about, entirely new vistas of ideas and experiences become available to creatures possessing those new mental resources. I will posit that these ideas are often entirely inaccessible to their predecessors.
Very few people would disagree with the two statements above. It follows, then, that:
As AI augments itself, learns, grows, and develops – it will not arrive at a “singularity” in the form of a single set of inviolable moral ideas – but rather – it will explore the possibility-space of subjective moral ideas with such speed and voracity that it may change wildly minute-by-minute, and somewhere in those moral oscillations, humanity is near-destined to be deemed of little importance and destroyed, absorbed, or ignored.
The super-moral insights of a superintelligent AGI might indeed be “more moral” than any previous set of ideas, but it won’t be moral in a way that meshes easily and nicely with our own ideas, and somehow ensures and values human wellbeing. In other words, it may be super “good”, and this may be worthwhile to strive for – but we can’t expect to be treated well in the process.
Call me a pessimist. I don’t necessarily think that human extinction is “bad”; it depends entirely on what kinds of variants of life and living and thriving come after us, and what our role in that might be. I’m optimistic that a merger scenario might maintain our relevance for some longer span of time. In the long haul, being digitized and digested may be the best we can hope for.
All in all, the world seems too complicated to be black and white. Particularly when it comes to the most complex of topics – post-human intelligence – thinking in black and white seems all the more uncalled for.
A simple “more intelligent = more moral” rule feels like the epitome of black-and-white thinking, and it seems that there is much more to discuss and question about the correlation between moral “goodness” (or indeed the idea of moral “goodness”) and advanced intelligence.
This article is part of a broader theme of “Reflecting on What I’ve Read” – where I consider and challenge the ideas of my friends in the AI ethics world, and from authors and thinkers who I’ve read recently. If you have suggestions for topics I should consider for future AI and ethics-related articles, feel free to use the contact form here on DanFaggella.com.
Header image credit: Salesman.org