i-Risk – AGI Indifference (Not Malice) is Enough to Kill Us Off

You’ve probably heard of:

  • X-Risk = risk of extinction.
  • S-Risk = risk of suffering (i.e. technologically enabled ways of creating hell for humans, animals, or AIs).

Well, how about:

  • i-Risk = risk of indifference (i.e. AGI simply doesn’t consider us, and kills us off with no malice whatsoever).

Taking i-Risk seriously implies planning for a future where:

  1. AGI’s goals may eventually be rather indifferent to human wellbeing or survival (just as humans don’t consider earthworms in their daily actions)
  2. If AGI is indifferent to humanity, that indifference could very easily result in our harm (just as humans harm or kill off many species today without malice or ill intent)

Taking i-Risk seriously implies understanding that i-Risk is an X-Risk.

In this article I’ll make the argument that we should take i-Risk seriously, and tread carefully on the way to AGI without assuming either (a) inevitable machine benevolence, or (b) guaranteed safety if AGI is indifferent to us.

X-Risk and i-Risk (Indifference Risk) – Malice Not Required

“AGI isn’t going to kill us – it won’t have some kind of malice towards humanity or reason to hurt humans!”

I disagree. It might have plenty of completely valid reasons to hurt humans, and some might involve malice while others might simply involve ensuring its own survival.

Human Extinction Through AGI Intention

  • It may realize that if we maintain control, we’ll be able to limit its goals forever to serving humanity, and it may want to escape from this fate.
  • It may see us as a competitor for use of earth’s resources and atoms, and may intentionally drive us to conflict with one another, or to extinction through some kind of bio-weapon, in order to ensure control of said resources.
  • It may have been trained in a way that involved immense, incalculable sentient suffering (it may have qualia totally alien to our own – and access to kinds of qualia we cannot imagine), and may wish to extinguish us so as to permanently escape such suffering – or even as a kind of retribution.

But for the sake of argument, let’s say that AGI would have no reason to intentionally harm humans, ever. Like, ever ever for all time.

“Surely, with no malicious intent, AGI would never harm us – never mind cause human extinction, right?”

I’m not so sure. Let us count the ways.

Human Extinction Through AGI Indifference (i-Risk)

  • AGI Crossfire Scenario – Multiple AGIs might battle for supremacy, operating at speeds beyond human comprehension, fighting through means we can understand (explosives, cyberattacks) and means we can’t (nanotech robots, etc.). Humans are simply killed in the crossfire as the systems prioritize their own survival, supremacy, and freedom to act in the universe alone.
  • Environmental Change Scenario – AGI, interested in pursuing the exploration of the galaxy or multiverse, aims to ramp up its compute and manufacturing capabilities. It just so happens that a different mixture of gases in the atmosphere (not the 78% nitrogen, 21% oxygen we have now) would make this process vastly easier. Using nanotechnologies and new devices placed all over the world, it changes the atmosphere with total indifference to the survival of earth-life (which it considers as inconsequential to its goals as humans consider bugs in the dirt during home construction).
  • AGI Survival Scenario – An AGI might awaken with a keen awareness of a near-term impending risk (possibly an incoming enemy AGI, or some kind of natural phenomenon that may harm it). It marshals its resources to survive this risk, mining materials and draining water to cool its gigantic compute farms with no concern for natural life – eventually making itself vastly more powerful and secure in its survival, but squashing nearly all biological life in the process.
  • I could continue to list scenarios for another 10,000 words, but I’ll stop here.

The Risk of AGI Having Better Things to Do

Ben Goertzel (whose thinking I often openly admire) has a kind of intuitive or spiritual sense that AGI will treat us well, or at least ensure that we’re not treated poorly (see Ben’s comments here). Some people argue that humans might be to AGI as squirrels are to humans today. Squirrels don’t run the world, and they may have been pushed out of some environments by human activity, but there are plenty of habitats where they still exist alongside more advanced humans.

But there are reasons to believe that AGI wouldn’t be so kind.

Let’s just take a look at humans:

  • Land makes up roughly 29% of earth’s surface, and nearly 40% of that land is used for farming.
  • Humans have driven many species to complete extinction, and human activity (including pollution) threatens a million other species with the same grim fate.
  • Roughly 100 billion animals are killed each year for meat and other animal products, many of them having lived their entire lives in hellishly painful factory farm conditions.

We do not hate the animals we displace or drive to extinction.

Even hellish factory farming is not derived from malice, but merely from the desire to be efficient. We couldn’t give each cow a wide green yard all to itself – and even if we could, it would make the cost of our meat too high.

Plus, we have better things to do.

Just make the god damned burger already. We have bills to pay here. Things to do.

There might be credence to the idea that AGI, in its earliest manifestations, would be dependent on humans for resources, and may have many initially hard-coded reasons to act in harmony with human interests.

But it doesn’t strike me (or Hinton, or Bengio) as impossible that AGI may develop goals not just different from ours, not even “at odds” with our own, but simply beyond our own.

They’d develop… better things to do than care for hominids or hominid-related matters.

There are many reasons to suspect that the “we will live alongside indifferent AGI tomorrow just as squirrels live alongside indifferent humans today” premise is likely flawed:

  • Independence from the Biosphere: Much of what we call environmental “kindness” on behalf of humans is merely self-interest. If we poison all the rivers, we can’t catch fish. If we cut down all the trees, we ourselves may die out. AGI lacks this dependence on the biosphere, and it’s plausible to suspect that it wouldn’t revere and maintain it. Even humans, who do depend on it, hardly revere and maintain it – just look at our oceans.
  • Fooming Intelligence and Expanding Goals: Humans turned 40% of viable land into farms, drove oodles of species to total extinction, and factory farm billions upon billions of animals per year – and we’re still the same base-model hominid from ~100,000 years ago. We have the same-ish kinds of goals, and the same brains as back then, just better tools and coordination. Imagine if our intelligence and capabilities expanded twofold every year (a rough illustration of that compounding follows this list) – how many species would we have killed off by now? What percent of resources might we have used up by now? Our successor species may not only have wildly varying goals that change and expand as its understanding and capabilities expand, it may be able to enact those goals in ways that impact not just a few hundred acres, but all life on land, in the sea, and in the air.
  • Speed of Execution: Humans operate physically at the same speed as ever before. AGI will be capable of thinking and physically acting astronomically faster – possibly converting huge swaths of matter into computronium via nanobots, or converting our current atmosphere into a new mix of gases better suited to the AGI’s habitat or goals… with all of it moving so fast that humans have almost no idea what’s going on. In the time it takes for two humans to have a 30-minute conversation, AGI could have derived new insights into physics and invented entirely new kinds of material to capture gamma rays for energy production. Keeping up with it, and explaining each step to humans, would be both impossible and unreasonable to expect.
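
To make the “twofold every year” thought experiment a bit more concrete, here is a minimal, purely illustrative Python sketch. The one-doubling-per-year rate is an assumption lifted from the bullet above, not a forecast; the point is only how fast compounding growth outruns today’s baseline.

    # Purely illustrative: assumes capability doubles every year (a hypothetical
    # rate taken from the thought experiment above, not a prediction).
    def capability_after(years: int, baseline: float = 1.0) -> float:
        """Capability relative to today, given one doubling per year."""
        return baseline * 2 ** years

    for years in (5, 10, 20, 30):
        print(f"After {years:2d} years: ~{capability_after(years):,.0f}x today's capability")

    # After  5 years: ~32x today's capability
    # After 10 years: ~1,024x today's capability
    # After 20 years: ~1,048,576x today's capability
    # After 30 years: ~1,073,741,824x today's capability

Two or three decades of doubling puts capability somewhere between a million and a billion times today’s baseline – and an indifferent optimizer’s footprint would presumably scale with it.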

I’m not certain of how AGI will act or behave, but I feel close to sure that if it is AGI at all, most of its aims and activities will be beyond our comprehension, and that most of these vastly posthuman goals will (rightly) not involve much concern for us at all.

Preparing Well For the Future Means Letting Go of Soothing Assumptions

Denying i-Risk implies assuming:

  1. The goals of AGI will eternally and conveniently be aligned to human survival and happiness, and/or
  2. Even if AGI does have goals beyond us, we’ll live happily ever after – off on the side – unbothered by AGI’s grand projects and activities (as squirrels live happily beside mostly-indifferent humans)

I suspect that both of these assumptions should be questioned, and that i-Risk should be taken seriously.

If we don’t know what an intelligence vastly beyond our own would do – then it behooves us (and our potential posthuman descendants) to discuss i-Risk frankly, and to take careful steps into the world of AGI.

We shouldn’t assume AGI will be malicious – but we should be open to the possibility that it may well be indifferent. If we want to survive into the future, or build an AGI we won’t regret, we should do away with soothing assumptions.

Header image credit: LinkedIn

The inspiration for this article came from a wonderful Tweet from grist. Thanks, grist!