Green Eggs and Ham – Facing Future Technology and AI Like an Adult
If you were a child at any point in the last 50 years, you’re probably familiar with the story. Sam I Am tries to get a protagonist (let’s call him…
You’ve probably heard of:
Well, how about:
Taking i-Risk seriously implies planning for a future where:
Taking i-Risk seriously implies understanding that i-Risk is an X-Risk.
In this article I’ll make the argument that we should take i-Risk seriously, and tread carefully on the way to AGI without assuming either (a) inevitable machine benevolence, or (b) guaranteed safety if AGI is indifferent to us.
“AGI isn’t going to kill us – it won’t have some kind of malice towards humanity or reason to hurt humans!”
I disagree. It might have plenty of completely valid reasons to hurt humans, and some might involve malice while others might simply involve ensuring its own survival.
Human Extinction Through AGI Intention
But for the sake of argument, let’s say that AGI would have no reason to intentionally harm humans, ever. Like, ever ever for all time.
“Surely, with no malicious intent, AGI would never harm us – never mind cause human extinction, right?”
I’m not so sure. Let us count the ways.
Human Extinction Through AGI Indifference (i-Risk)
Ben Goertzel (whose thinking I often openly admire) has a kind of intuitive or spiritual sense that AGI will treat us well, or at least ensure that we’re not treated poorly (see Ben’s comments here). Some people argue that humans might be to AGI as squirrels are to humans today. Squirrels don’t run the world, and they may have been kicked out of some environments by human activity, but there are plenty of habitats where they still exist alongside more advanced humans.
But there are reasons to believe that AGI wouldn’t be so kind.
Let’s just take a look at humans:
We do not hate the animals we displace or drive to extinction.
Even hellish factory farming is not derived from malice, but merely from the desire to be efficient. We couldn’t give each cow a wide green yard all to itself – and even if we could, it would make the cost of our meat too high.
Plus, we have better things to do.
Just make the god damned burger already. We have bills to pay here. Things to do.
There might be credence to the idea that AGI, in its earliest manifestations, would be dependent on humans for resources, and may have many initially hard-coded reasons to act in harmony with human interests.
But it doesn’t strike me (or Hinton, or Bengio) as impossible that AGI may develop goals not just different from ours, not even “at odds” with our own, but simply beyond our own.
They’d develop… better things to do than care for hominids or hominid-related matters.
There are many reasons to suspect that the “we will live alongside indifferent AGI tomorrow just as squirrels live alongside indifferent humans today” premise is likely flawed:
I’m not certain of how AGI will act or behave, but I feel close to certain that, if it is to be AGI at all, most of its aims and activities will be beyond our comprehension, and that most of these vastly posthuman goals will (rightly) not involve much concern for us at all.
Denying i-Risk implies assuming:
I suspect that both of these assumptions should be questioned, and that i-Risk should be taken seriously.
If we don’t know what an intelligence vastly beyond our own would do – then it behooves us (and our potential posthuman descendants) to discuss i-Risk frankly, and to take careful steps into the world of AGI.
We shouldn’t assume AGI will be malicious – but we should be open to the fact that it may well be indifferent – and that if we want to survive into the future – or build an AGI we won’t regret – we should do away with soothing assumptions.
Header image credit: LinkedIn
The inspiration for this article came from a wonderful Tweet from grist. Thanks, grist!