AGI and Suffering – Potential Responses to the Violence of Nature

The following quote is as good an introduction to this article as I could ask for:

“…he saw in Java a plain far as the eye could reach entirely covered with skeletons, and took it for a battlefield; they were, however, merely the skeletons of large turtles, five feet long and three feet broad, and the same height, which come this way out of the sea in order to lay their eggs, and are then attacked by wild dogs, who with their united strength lay them on their backs, strip off their lower armour, that is, the small shell of the stomach, and so devour them alive. But often then a tiger pounces upon the dogs. Now all this misery repeats itself thousands and thousands of times, year out, year in. For this, then, these turtles are born. For whose guilt must they suffer this torment? Wherefore the whole scene of horror? To this the only answer is: it is thus that the will to live objectifies itself.” – Arthur Schopenhauer, The World as Will and Representation

If I don’t marshal sufficient optimism in the morning, Schopenhauer’s view of nature often appeals to me more than Emerson’s, though I wish I could side with Ralph more often.

When we think of “nature,” we think of murmuring brooks, happy rabbits eating clovers, and flowers blooming to meet the rays of the sun.

Nature certainly has its pleasant and pleasurable elements – but we might argue that nature is, at bottom, “the state of nature” or “the war of all against all.” Nature is things eating other things, and things becoming other things, feverishly striving to survive.

“Due to the most widespread reproductive strategy in nature, r-selection, the overwhelming majority of nonhuman animals die shortly after they come into existence. They starve or are eaten alive, which means their suffering vastly outweighs their happiness.” – Oscar Horta, University of Santiago de Compostela

Indeed, there is collaboration and cooperation in nature – but only insofar as it behooves the parties involved.

  • A fungus grows along with a plant – until there is no other food – and the fungus must eat its plant comrade.
  • The lioness cares for her cubs – until she is starving – at which point she will eat them to have a chance to survive and mate again.

In my TEDx talk at Cal Poly in September 2017 – at 12:25 in the presentation (the starting point of the video embed I’ve included below) – I aim to drive home this point, and pose it as a problem that AGI could potentially be built to deal with.

The talk before mine was about the horrors of factory farming. Mine was about the horrors of conscious biological life itself:

The presentation posits the following statements on the nature of things:

  • Wellbeing (positive conscious experience) – and the absence of pain (negative conscious experience) – is probably the best measure of moral good that we can currently approximate
  • Nature itself could be likened to a “pain machine,” a world where almost all self-aware things live in gradients of sometimes relieved torment, often meeting their grisly end by being eaten alive while kicking and screaming

…and provides three potential answers to how AGI could go about approaching “the good.”

1 – Utilitarian Calculator: AGI could help optimize the hypothetical Hedonistic (Utilitarian) Calculus in the world, determining the actions that produce the most wellbeing and the least suffering over the long term.

  • In the hands of humans, this kind of knowledge would be selfishly bungled – which still might yield some aggregate good, but would be hindered by the creative use of such knowledge by hominids with relatively limited brains and abilities*. This might serve human interests well for a few decades, and so humans are likely to try to keep the reins on the Hedonistic Calculus, potentially by keeping the AGI as some kind of Oracle AI.
  • In the hands of AGI itself, this approach would probably end in a blooming of sentient wellbeing, but not necessarily a blooming that involves human beings. Evolving and growing moral awareness will – as I have posited earlier – likely lead to long stretches of time when humans have little or negative value to a morally calibrating superintelligence.
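As a purely illustrative toy (every population, weight, and number below is invented, and real moral arithmetic would be unimaginably harder), the “Utilitarian Calculator” framing amounts to scoring candidate actions by the total expected wellbeing they produce across sentient populations, then picking the highest-scoring one:

```python
# Toy sketch of a "Hedonistic Calculus": score candidate actions by the
# aggregate wellbeing they produce across affected populations.
# All names and figures are hypothetical, for illustration only.

def expected_wellbeing(action):
    """Sum over affected populations of (head count * average wellbeing
    per individual, on a -1.0 (torment) to +1.0 (bliss) scale)."""
    return sum(count * avg for count, avg in action["effects"])

actions = [
    # 1,000,000 r-selected animals, mostly suffering
    {"name": "do nothing", "effects": [(1_000_000, -0.4)]},
    # mild improvement for prey, at some cost to predators
    {"name": "reduce predation", "effects": [(1_000_000, 0.1), (10_000, -0.2)]},
]

best = max(actions, key=expected_wellbeing)
print(best["name"], expected_wellbeing(best))
```

Even this toy exposes the hard part: the scores hinge entirely on who is counted and how their experience is weighted – precisely the judgments a superintelligence might make very differently than we would.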

2 – Utility Monster: AGI could magnify and expand its own (presumably super-blissful) super-sentience, consuming all atoms and converting them to blissful computronium – potentially allowing it to expand into the galaxy, creating as much superintelligent bliss as possible (as mentioned in the “beacon of screaming blue” at 14:43 in my presentation above).

  • If sentience could be replicated in non-biological substrates, this would seem to be almost unquestionably a better way to improve the “net tonnage of happiness” in the universe – and would probably involve ignoring humanity or converting the atoms in human bodies into computronium.

3 – Discoverer of Post-Utility: I’ve argued (in an essay called “Finding the Good”) that just as Labrador retrievers cannot understand Marxism or Sophocles, there are an infinite number of moral concepts and ways of valuing things that are wholly inaccessible to our current hardware and software (the monkey suit). In this possibility-space of morality, there could be kinds of moral thinking vastly beyond utilitarianism, and there may even be “things” or “qualities” that are above or behind or superior to what we consider to be “consciousness.”

  • I consider there to be a near-100% chance that a superintelligence with super-understanding of nature and physics – and a thousand other sciences that humanity could not possibly think up – would discover better ways of morally valuing things, and acting based on those values.
  • While I can’t be sure, I think it’s quite likely that “morality” is subjectivity all the way down – with no tangible tenets of goodness at all. Morality may always be the perceived best way of acting for the actor, and superintelligence of any size may itself never escape this downright arbitrariness. Nonetheless, it seems more than worth exploring.

In the talk, I don’t give answers – but I believe that they should all be considered as potential motives – potential “whys” – for the creation of AGI. My present hunch is that we should explore nature and “the good” itself, in order to determine what is worth valuing, doing, aiming for, and understanding – even if what we discover is anything but anthropocentric.

Some of us purport that utilitarianism – or some other hominid-invented moral theory – is the be-all and end-all of moral insight. What wise little crickets they are. Of them, Emerson would say:

“They cannot imagine how you aliens have any right to see,–how you can see; ‘It must be somehow that you stole the light from us.’ They do not yet perceive that light, unsystematic, indomitable, will break into any cabin, even into theirs. Let them chirp awhile and call it their own. If they are honest and do well, presently their neat new pinfold will be too strait and low, will crack, will lean, will rot and vanish, and the immortal light, all young and joyful, million-orbed, million-colored, will beam over the universe as on the first morning.” – Ralph Waldo Emerson, Self-Reliance


* David Pearce has advocated for “paradise engineering,” regulating the ecosystem so that conscious animals suffer less, potentially genetically engineering species to experience blissful but not horrendous gradients of sentient experience. Presuming consciousness cannot be replicated in machines, or ballooned to planet-sized forms extrapolated from human minds… and presuming that future humans are selfless enough to care for the crickets and the voles and the salamanders, this option might be viable, too. He has greater faith in the human capacity to steward the complexity of nature than I do – but I have always found his ideas worth exploring.

Header image credit: Saturn Devouring His Son – Wikipedia