Is Artificial Intelligence Worse Than Nukes? Maybe – but It May Be Better, Too

Both artificial intelligence and nuclear warheads could wipe out human life, though superintelligent AI might prove even more destructive.

That being said, AI is more than an “existential risk.” It’s also a potential “utilitarian bloom.” The creation of self-aware and superintelligent AI would arguably be the most morally relevant event imaginable.

Pardon the 5th-grader graphics, but this hastily assembled image sums up the point of this short article:

AI vs Nukes

Concerning ourselves exclusively with the impact of AI on humans (i.e. our extinction) is speciesism (or sometimes, carbon chauvinism): a preference for treating the sentient impact on one’s own species as more important than the net tonnage of suffering or wellbeing imposed on other sentient life.

Over the last two hundred years, humans have “expanded the moral circle” (to borrow the term from Peter Singer, whose book “The Expanding Circle” bears roughly the same title) to include:

  • People with different religious or racial backgrounds than ourselves
  • Mammals like dolphins or chimpanzees
  • Livestock, including chickens and cows

Yet potentially sentient machines often don’t get included in that circle – even if they might one day have a vastly richer and deeper sentient experience than humans ever will.

We’ve polled dozens of AI researchers – and many of them consider sentient artificial intelligence to be achievable by 2050-2060. That doesn’t mean it will happen, but it does mean that many of the smartest people in the AI domain don’t consider it impossible. If sentient and superintelligent AI were created (and if such a grand sentient intelligence could expand its abilities and awareness by itself), this would – in my opinion – easily be the most morally relevant event imaginable by Homo sapiens.

At 14:20 into my TEDx talk at California Polytechnic State University, I represent positive sentient experience as blue and negative sentient experience as red, and make the case for the potential upside of creating superintelligence:

If you’re brave (or bored?) enough to watch the entire presentation, you’ll see that I’m far from arguing that humans should give way to AI, either now or in some future scenario. What I am arguing is that the moral worth of a hypothetically sentient and unimaginably intelligent AI would be undeniable – and it’s an issue we need to grapple with as we figure out what comes after people, and as the fecundity of nature smatters and bungles its way into new forms:

“Each material thing has its celestial side; has its translation, through humanity, into the spiritual and necessary sphere, where it plays a part as indestructible as any other. And to these, their ends, all things continually ascend. The gases gather to the solid firmament; the chemic lump arrives at the plant, and grows; arrives at the quadruped, and walks; arrives at the man, and thinks.” – Ralph Waldo Emerson, The Uses of Great Men

The “chemic lump” has things to arrive at after and beyond Homo sapiens – and just as we can be grateful that the chimpanzees didn’t stop evolving (for how much richness and intelligence would have been lost!), our future intelligences – whether cognitively enhanced humans or artificial general intelligence – will likely be grateful that they were permitted to spring forth (some might argue: for how much richness and intelligence would otherwise be lost!) – a point I’ve made before in an article titled “Can AI Make the World a Better Place?”

Certainly, artificial intelligence is an existential risk.

Unlike other existential risks, however, it is not merely a risk – it is a potential opportunity to expand the very thing which is morally valuable: rich, intelligent sentience itself.

Nukes and global warming do not share this same overt potential upside.

It’s not only about the impact on humanity – it’s about the impact on the net tonnage of self-aware “stuff” in the universe, and in that regard, AI and nukes shouldn’t be put in the same bucket.

AI is not simply playing with fire; it is potentially playing God… a distinction which should be made clear in any debate focused exclusively on “risk.”


Header image credit: egypttoday.com