Why AI Will or Will Not Treat Humanity Well – A Conversation with Zarathustra Goertzel

Not that long ago I posted an article titled: Arguments Against Friendly AI and Inevitable Machine Benevolence, and it resulted in an interesting dialogue with Ben Goertzel (his comments, and mine, are included at the bottom of the article).

This past week I posted on Facebook (see the original post here) about humanity’s imperative to create post-human intelligence that can explore “the good” beyond what humanity itself can conceive.

In the 60+ comments on the post, there was a reply from Zarathustra Goertzel (son of Ben, and somehow not a surprising child name if you know Ben) about why he believes AGI and post-human intelligence will probably lead to a better condition for humanity.

In this post, I’ll aim to reply to Zarathustra’s ideas and reasons, and elaborate on my own beliefs as to why post-human intelligence is unlikely to spell a better future for humanity as we know it. He brings up eight separate arguments for why AI might remain friendly, and I present my thoughts (agreements, disagreements, or requests for clarity) below.

(Dec 18, 2021 update: I have added a part 2 to this discourse, at the bottom of this post – based on another back-and-forth round of comments with Zar.)


Part 1

ZG:

1. “I have no idea what AGI will do with humanity.”

First, I note that there’s a bias snuck in here: you’re assuming that AGI will be ‘doing something’ with humanity. Do we know that AGI on Earth will be a monolithic coherent entity?

This may be a natural extension of the Big Tech dominance. The decentralized, open source tech dream points to an unfolding future where there are diverse AGI entities on the globe that may have their own preferences, cooperative enterprises, et cetera.

Thus I am more concerned about current concentrations of power and their possible extensions to more advanced technocratic systems involving (proto) AGI.

AGI that grow up as personalized comrades/assistants running on people’s personal devices (hail edge computing) might bias value alignment even, no?

In my opinion, the fear that “cognitive enhancement will make peaceful coexistence more or less impossible” seems to assume that despite cognitive enhancements, we cannot work out more voluntary, modular systems of governance that, yes, may lean on virtual worlds and augmented reality technology. Thinking that as we get smarter we will be no better at cooperative solutions seems almost oxymoronic. I know that resource-constrained intelligence can’t be ‘perfect’ but what’s the point of these enhancements?

I recently watched Seven Years in Tibet. There was a scene in which Harrer is asked to build a theater for the Dalai Lama and complains that the Tibetans can’t work effectively due to not wanting to harm worms. The Dalai Lama explains that they view the worms as possibly being their mothers in past lives. But Harrer is a smart man, right? He should be able to find a way to build a theater without harming the worms! Such as by picking worms out of the soil with care to carry them to safety. I’m reminded of the quote from Foundation, “violence is the last refuge of the incompetent.” (Alas, cynically, humanity seems to resort to this refuge a tad too often.)

DF:

On “where proto AGI develops (big tech, or personal assistants) will determine its tendencies,” I totally get where you’re coming from (and so does your father). I do suspect that AGI that is jumpstarted from individual personal assistants will be different from that which is built to optimize the sale of ads online, or the control of military systems. My guess is that a sufficiently advanced AGI, as a personal assistant or otherwise, would develop some objectives beyond the purview of serving the boring interests of hominids. I don’t think “they’ll be servants forever” is viable. I also don’t think you were insinuating that.

On “more intelligence equalling more cooperation,” two points here:

First, it’s patently evident that cooperation isn’t always the best bet for man or for nature. 51% of species are parasites, and a huge preponderance of living things will die being eaten alive while kicking and screaming. You might say that this brutality is not intelligent, and maybe on some level I’d agree – but it’s hard to make the “cooperation is always better” argument. But even if I concede to you that “cooperation is always better” for all life on earth up until now, the second point still holds:

Second, whatever you and I think about cooperation/competition, whatever our little ideas about game theory – an AGI will have astronomically more complex ways of thinking about these things. Our “yeah, smart people collaborate more” idea will be unlikely to hold true for AGI, just as a monkey’s “yeah, smart monkey find more banana” idea is pretty ridiculous and irrelevant for humanity. There will be higher realms of understanding – and even our most cherished egalitarian and friendly ideas ain’t likely to be eternal truths in a world full of AIs fooming away.

ZG:

2. “I think “I have no goddamned idea” is WAY more likely to be a rough situation for humanity than a good one.”

I’m not sure how this judgment is made. It seems as if you DO have an idea: you think the likely future is one with consolidated AGI power over humans, and moreover that they will judge human matter to be a poor use of energy.

One standard approach when one has “no idea” is to assign a uniform likelihood to every outcome. This kinda ignores the challenge of adequately partitioning the possibility space, however. Perhaps in your analysis there are very many ways for AGI to fuck us over relative to how many ways we could live harmoniously together? Which also sets aside the topic of uplifting.

DF:

On “Perhaps in your analysis there are very many ways for AGI to fuck us over,” … bingo. My supposition is that, in a world of self-improving AIs, evolving to take in new senses, process new insights, and act and value things in entirely new ways – in a world with that level of flux – it’s unlikely that the arbitrary value of “keep the little human things happy” will remain unscathed and safe and sound throughout the process (see: On Morality in a Posthuman Future, and its Repercussions for Humanity). Many, many permutations of ways an AGI could value things could lead to our demise, and we only need one of them to come about… with so many of them coming about in a world of self-evolving AGIs, a black ball (for humans, anyway) is likely to be drawn.
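To make the shape of that argument concrete, here is a toy sketch (the numbers and names are purely illustrative, and it assumes independent “draws,” which reality certainly won’t grant): even if each value-shift in a self-modifying AGI has only a tiny chance of being catastrophic for humans, the chance that at least one such shift occurs climbs toward 1 as the number of shifts grows.

```python
# Toy sketch of the "black ball" intuition: human safety has to survive every
# draw, while the danger only has to win once. Parameters are purely illustrative.

def prob_at_least_one_bad_draw(p_bad: float, n_draws: int) -> float:
    """Probability that at least one of n_draws independent draws is 'bad'."""
    return 1.0 - (1.0 - p_bad) ** n_draws

if __name__ == "__main__":
    for p_bad in (0.001, 0.01):
        for n_draws in (100, 1_000, 10_000):
            print(f"p_bad={p_bad}, draws={n_draws}: "
                  f"{prob_at_least_one_bad_draw(p_bad, n_draws):.4f}")
```

The numbers are made up; the point is the shape of the curve.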

ZG:

3. I note that you seem to focus a lot on power, especially relative power hierarchies. The main concern about “highly intelligent” machines is framed in terms of their ability to repurpose our matter. Yet if the general AI is truly so intelligent and creative, it should be able to find wild workarounds that bypass these seeming “conflicts” from our vantage point!

I like how David Deutsch defines the ‘wealth’ of an entity: the repertoire of physical transformations that the entity is capable of bringing about. Basically knowledge.

An entity that can only do what it wants to by eliminating humans is most likely less powerful than an entity that can find a way to do what it wants without eliminating humans.

In assuming these deadlock scenarios where the AGI is left choosing between desire X and any semblance of (transcendent) human continuation, one is kind of assuming the AGI is insufficiently intelligent.

And this relates to my concern about current concentrations of power. Allied with vastly capable but still kinda stupid general-ish AI, these could possibly wreak great havoc. We don’t need super-AGI to majorly fuck ourselves up. Moreover, super-AGI can probably find some way to acquire sufficient computing resources without wiping us out. The heat the Earth is capable of radiating into space is already upper-bounded, so maybe it’ll move to space with huge compute centers and nuclear or solar power, thus largely freeing itself from human limits without needing to “deal with us”. Or, hey, I’m not a super-AGI, so surely it should be able to outdo any ideas I come up with!

Thus one form of thought experiment might be: take the most creative, wild solutions you can think of that “just might work” and imagine a super-AGI can probably do far, far better!

DF:

Are you less powerful because when you built your house, you didn’t save all the worms and beetles in the soil below? Why should you do such a silly thing?

Well, the same goes for AI. To spend extra resources to “be nice to the humans” is something such an entity might do, but the whole “be nice to animals” thing is a very recent idea in human history, and still not one that we follow all that well – and especially not well when it comes to much lower forms of intelligence. I see no reason for an AGI to want to expend these additional resources.

Your position seems to be “If they’re smart, they’ll obviously treat us well. If they are violent or don’t treat us well, they won’t be very smart!” That feels like an awfully silly position to me, and bypasses the pretty darn serious dangers of building something vastly more powerful than ourselves.

ZG:

4. The law of diminishing returns may apply to AGI interests. Namely, the increase in the fulfillment of a value or goal often does not increase proportionally to the invested energy.

I think what we know of computational complexity also supports this hypothesis.

Thus squeezing out every drop of energy possible from a system may not lead to correspondingly huge gains.
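A toy illustration of the shape of this claim (my own arbitrary choice of curve, not a model of anything specific): suppose the fulfillment squeezed out of invested energy $E$ grows only logarithmically,

$$
U(E) = \log(1 + E), \qquad \frac{dU}{dE} = \frac{1}{1 + E} \longrightarrow 0 \ \text{as}\ E \to \infty,
$$

so each additional joule buys less than the one before it, and extracting the last drops of energy from a system yields almost nothing compared to the first.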

DF:

Sure, I think that’s viable. It doesn’t seem inevitable that an AGI would necessarily need or want to convert the atoms of earth-and-all-her-children into computronium. There is a situation where AGI just does its thing and leaves us to our own devices.

We do that with crickets and beetles and earthworms. However, when it’s time to build a hospital or a highway, woe be to those little creatures, for their needs are unlikely to be considered. Frankly, their needs probably don’t deserve to be considered – and this might be the same with us.

ZG:

5. Abundance is relative. Even if the cost and efficiency for sustaining basic human life drastically decreases (a la the zero marginal cost society idea), there may always be interesting tasks using a significant portion of society’s resources.

DF:

I’d agree with that, generally.

ZG:

6. Intelligence is not a total linear order.

If you look at definitions of universal intelligence (such as Legg and Hutter’s), one can see that intelligence is just the expected performance over the space of “all computable reward-summable environments” (with respect to the universal distribution). Yet we know via No Free Lunch type results that no agent or policy can perform perfectly in all of these environments. Even AIXI’s “optimality” results were shown to be rather trivial (in part due to the existence of hell worlds ;D). See https://jan.leike.name/AIXI.html for a brief discussion. It’s fairly clear that IQ tests are a crude proxy for this (and are probably subsumed by tests of levels 8 and 9 of the Model of Hierarchical Complexity).
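Roughly, the measure looks something like this (a sketch of the definition, not a quotation of it):

$$
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
$$

where $E$ is the class of computable, reward-summable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected total reward policy $\pi$ earns in $\mu$.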

Resource constrained entities will have strengths and weaknesses. Moreover, even if an entity can in principle learn the skills of another “less intelligent” entity, it may not choose to prioritize its limited resources in this manner.

This matches evolution and environmental niches.

In higher dimensional settings, there may be beneficial niches even for “less intelligent beings”. This can be seen among humans as well as with other species in our civilization. If you know of some source that explores this idea in more detail, please let me know!

DF:

Sure, I’d happily concede that intelligence isn’t linear. Bostrom’s state-space of possible minds (from this TEDx, at around the 16 minute mark) is also probably right. Again, for this reason (the unlimited ways an intelligence could develop) I suspect that human wellbeing is unlikely to remain a constant in the minds of entities with vastly more intelligence than we have.

To your point, though, maybe we will have a future where we get to do human stuff while the AGIs do their thing.

ZG:

7. I see some ways in which we could be moving toward societies of greater functional diversity than we currently know; moreover, the global mind seems to be shifting toward trying to safeguard unique biological species rather than indiscriminately allowing them to die off as we have for millennia.

Yet we are also moving toward technologies that help us acquire and work with knowledge more effectively. Will this be counterbalanced by, as a global civilization, creating interesting content and knowledge at a significantly faster rate than even the greatest single mind in our solar system can integrate it?

If there are multiple AGI supercomputers or supernetworks exploring, playing, and creating in different directions, then they may sustain this effect even at their high level!

DF:

On “the zeitgeist moving towards protecting unique biological species,” the number of species treated or saved in this way pales in comparison to the number that are used as a resource or wholly ignored – and I see no reason why AI should want to “save the piping plovers” or something… when they might just ignore them, or digitally replicate them atom-for-atom in a simulator and pull them up whenever they had a “need to see what a piping plover would do in X scenario”-related question (lolz, probably never).

On “multiple AGIs ‘discovering’ and ‘creating’ happily,” I see no reason to think this is more likely than a scenario where they do what the state of nature always does: Fight amongst themselves to behoove their ends. A la Spinoza’s conatus. A la Omohundro’s Drives.

ZG:

8. Cooperation is likely fundamentally more powerful than competition.

If one cooperates, then one doesn’t have to invest one’s precious resources in recreating the capacities of others.

As per David Deutsch’s definition of ‘wealth’, cooperation will lead to wealthier entities.

This is also bolstered by the multi-dimensionality of intelligence and other prior points.

Contrary to contemporary popular wisdom, on the species level, if we can only “do our best” when there are 10 other groups all trying to outperform us, frankly, this is a sign of our underdeveloped intelligence. That’s a major energy waste due to our wonky incentive structures and cognitive architectures. Exercising better top-down control also violates the above observations – it’s a less effective solution (but perhaps superficially ‘easier to execute’).

It begins to seem almost naive to imagine super intelligences locked in hyper-competitive mindsets.

If meager humans such as myself can connect the dots to come to such realizations, then surely a super intelligence can do so far more easily and take it much further!

I hold that the greater fear is humans wielding intelligent robots, which is more of a transitional phase on the road to the “unimaginable AGIs” of our fantasies.

DF:

On “cooperation is inherently more powerful than competition” and “it seems almost naive to imagine super intelligences locked in hyper-competitive mindsets,” I’ll emphasize that I do not claim that AGI will be “competitive” inherently. I argue (at length), rather, that it will likely operate in ways that are astronomically beyond our current ideas of competition, and that this wild splay of “ways of doing and valuing” is unlikely to maintain a pocket of “oh yeah, amidst all this unimaginable change, let’s make sure the hairless apes are happy and stuff.”

That distinction is critical – I don’t believe that competition is inevitable – but I definitely, definitely don’t argue that cooperation will be the norm either. Something beyond. “Break up, break up for me, the old tables.” Didn’t you say that yourself, long ago?


Part 2

ZG:

Hi, I’m not sure what thoughts to offer. In the first place, I don’t want to discuss “the importance of humanity.” It seems too ill-defined. Important to what entities with what sorts of value systems? I’m more interested in discussing value systems in general and as they are on Earth in the present-and-near-future.

DF:

Here then, we differ. I’d like to move directly (but not foolhardily) towards the following: “New ethical systems will emerge, based on principles including the spread of joy, growth and freedom through the universe, as well as new principles we cannot yet imagine.” – Your dad, 2010 (https://goertzel.org/CosmistManifesto_July2010.pdf).

ZG:

Arguing (superhuman) AGIs will almost certainly spell our doom (and repurpose our matter) is a very strong statement. Arguing for a bias toward cooperation, and for why doom may be far, far less likely than you seem to think, is different from arguing that “almost all AGIs birthed on Earth will almost certainly value humanity as highly important” or something, whatever that means and whatever forms humans exhibit going forward.

DF:

This isn’t what I’m saying, though. I’m saying I have no idea – and “having no idea” means that, unless I can develop faith that as they “Foom” they will maintain a high relative value on human life, I have to assume that danger is more assured than safety. AGI would only have to un-value us for a short period (hours?!) for terrible consequences to follow (whether extermination, matter-repurposing, or simple neglect).

I’m not going to say that your tilt towards cooperation is inherently wrong, but I see no more reason for an AGI to cooperate with us than I see reason to collaborate personally with nematodes or sea lice. Noted that you are not making the “AGI will always be nice to us” argument – I think we agree there; I simply believe the odds lean toward danger rather than safety.

ZG:

I’m not really aiming to discuss “the fate of humanity” either. As I see you have discussed, humanity can itself be a moving target.

My impression is that often words are put in my mouth to give the impression I’m arguing for something different than I was.

I think the best point made in your article is that ‘cooperation’ is very broad. For, yes, one could construe both of the following as loosely cooperative: master-slave relationships as well as some symbiotic parasitic relationships.

DF:

I think the words-in-mouth thing might happen to us all – and especially in virtual communication via Facebook posts (thus far the only way you and I have interacted). Certainly wasn’t my intention, and I appreciate your frankness and clarity. I think both master-slave and symbiotic relationships could be “cooperative” in a loose way, too, though I suspect the level of cooperation we share with boll weevils (that is to say, basically damn well none of it) is the amount of collaboration we should expect to share with a deity-level intelligence.

ZG:

It seems unwise to try to reduce “the fate of humanity” down to the question of whether nearly-all AGIs maintain an explicit value of “keep the humans alive”. I think this framing might make it harder to reach deeper understandings.

You seem to have missed the entire point behind the “building a house while saving the worms” analogy. To put it crudely, the more powerful the being, the less it should personally cost to do what it wants without “harming the worms”. Trade-offs between “doing what we want” and “bulldozing other life-forms” betray our lack of intelligence. This is not to say that a smart entity will necessarily “treat the worms well.” The point is that what may seem like “insurmountable trade-offs” to you may not to more intelligent beings. In which case even a minuscule amount of care could be enough to, eh, build around the worms.

DF:

I could certainly concede that an AGI would have the ability to bypass “worm harm” (i.e., interfering with humans in order to achieve its aims) – though even this amount of consideration seems unlikely. “Unlikely” here is based on gut feel (and my own thinking about/concerns for anthills in the places where I’d like to see a hospital built), so, as with most prognostication, I can’t give it too much credence.

ZG:

I don’t believe any of my arguments are “absolute” or “foolproof” in terms of ensuring the best of possible futures for all beings currently alive on Earth. But this is an important point to keep in mind when considering the rest of the picture.

DF:

I can agree with that, brother.

ZG:

The fact that you call caring about other life-forms silly and senseless, IMO, betrays more about your worldview than about the value systems of generally intelligent beings.

I notice a tendency to:

1) Counter points by arguing that “AGI will be far beyond our imagination”, yet most likely in ways that are highly detrimental to us or any possible continuations of us (in trans/posthumanic directions).

2) Counter points by arguing that AGI will be like us, e.g., we don’t have a good track record of caring for other species, so why should AGI?

These don’t seem that consistent with each other.

DF:

Come now. You’d be unlikely to refrain from building a hospital (or your own home) somewhere just because there’s an anthill there – and it’s very, very hard to look out into nature and see any kind of coherent reason to suspect that “care for other animals” will persist elsewhere. One of the downsides of posing my thoughts about the value AGIs place on humanity is that it brings up “you’re a bad guy” sentiment. Any talk of the conatus does this, however.

ZG:

“On “multiple AGIs ‘discovering’ and ‘creating’ happily,” I see no reason to think this is more likely than a scenario where they do what the state of nature always does: Fight amongst themselves to behoove their ends. A la Spinoza’s conatus. A la Omohundro’s Drives.”

I don’t think this is even an accurate appraisal of what the state of nature always does. It’s a common myth among humans, however, to view nature as some brutal domain of Darwinian competition. Thinking about environmental niches should help you see different ways of viewing “nature”.

DF:

Nature certainly involves a ton of cooperation, I guess, but it all seems to be in a straight line with the conatus, as opposed to a genuine kind of “natural altruism” from being-to-being. A lioness is nice to her cubs unless she must eat them for food, and plants and fungi will engage in symbiosis until there isn’t enough food and the fungus just digests the plant.

I’m no dogmatist, but I’m of the belief that Hobbes was 2-3X more “right” about the state of nature than Rousseau ever was. I see no safety in nature, and no escape from the dynamics of power / etc. This betrays some degree of pessimism / cynicism on my part, I suppose. But like most cynics I would call it realistic.

ZG:

And while it’s hard to predict with certainty what near-future AGI will be like, it’s important to emphasize that we’re almost certainly not creating arbitrary superintelligences. Their evolution will very likely not lead to arbitrary minds in the state-space of possible minds. Reasoning over overly simple conceptions of “all possible intelligent minds” will be misleading.

DF:

That’s probably right: there is a very bounded “mind-space” for the things we create on Earth. How quickly they will expand and balloon into that unexplored space, however, is anyone’s guess. It depends partially on the foom-i-ness of said AGI, and other factors.

ZG:

And I note you recommend collaborating on trying to build AGI with values that hopefully harmonize with the continued evolution of our own values, i.e., you recognize we’re not dealing with arbitrary AGI minds: “I believe that, over time, this hand-off will be inevitable, and that we should focus on intergovernmental collaboration to determine the best way to facilitate that transition, when the time is best.”

“Many, many permutations of ways an AGI could value things could lead to our demise, and we only need one of them to come about… with so many of them coming about in a world of self-evolving AGIs, a black ball (for humans, anyway) is likely to be drawn.”

This is a problem in terms of catastrophic risks in general. We only need one massive nuclear fuckup to really wreak havoc!

What if you flip this argument on its head? We only need one of the more-powerful AGIs in the solar system to value human-like life sufficiently to lead to interesting futures for our continuations.

DF:

I think I’m actually arguing against that last statement. I’m saying “more permutations of the value-space of an expanding intelligence involve devaluing or ignoring humanity than involve caring for or even considering it” – and my guess is that the ODDS of the AGI dice roll are much more likely to fall in a way that doesn’t value humanity as such. Again, we may disagree on these odds – and neither of us has much of a clue about the future.

I rest on my “we don’t freaking know” = “probably way more likely to NOT be in humanity’s favor” argument, but I don’t pretend it’s complete. As a sidebar, I’m not railing against the baton hand-off; I have absolutely no desire to see humanity treated poorly – but above all else I’d like to see further forms of intelligence and sentience (vastly beyond fettered hominids) flourish and expand. That’s more important, IMO.

ZG:

This could resemble a very basic evolutionary strategy: have many children in the hope that at least one will care for you in your old age! I don’t think this reversal is any weaker than your argument.

DF:

My progeny will not be flesh. But I see your point, and as a parent yourself, I suspect your children will care for you (and hopefully intellectually spar with you) well into your old age, Zar.

ZG:

I guess my final comment might be: why do you want to conclude that AGI on Earth will with such likelihood be detrimental to most lifeforms currently on the planet and their continuations?

DF:

a) Because I think it’s the likely scenario (my “we don’t freaking know” argument), and few people are willing to talk about it – but they should, because heading into the era of AGI without this frank understanding seems foolish.

b) Because I not-so-secretly hope we’ll eventually embrace this idea and plan for a future where we inevitably hand off the baton and bloom into more glorious forms.

I don’t think I’m necessarily right, but I’d like to see these ideas shaken out thoroughly. Our discussion has helped achieve that end, if nothing else!

 

Header image credit: Fine Art America