Negative Utilitarianism Won’t Take Us Where We Need to Go

What matters more: happiness or suffering?

For almost all of us – utilitarians or not – both positive and negative qualia matter, and register when we make decisions about our own lives or decisions that might affect the lives of others.

Negative utilitarianism (NU), however, places greater relative moral weight on suffering than on happiness. The Wikipedia definition of negative utilitarianism will serve for the purposes of this article:

A form of negative consequentialism that can be described as the view that people should minimize the total amount of aggregate suffering, or that we should minimize suffering and then, secondarily, maximize the total amount of happiness.

In this article, I offer three reasons that NU is an untenable moral position, one that should absolutely not be used to guide the trajectory of post-human intelligence.

Reason 1: Negative Utilitarianism (NU) Presumes Pain “Weighs” More Than Pleasure

It has been argued that even one day in hell (total extreme suffering) would negate a near-eternity in heaven (total extreme pleasure).

To put such an overwhelming (infinite?) weight on negative qualia seems ridiculous. In Brian Tomasik’s words:

[Image: quotation from Brian Tomasik. Source: https://reducing-suffering.org/three-types-of-negative-utilitarianism/]

My friend Andres Gomez Emilsson once mentioned (though I’m certain he’s not the first to have done so) that humans might be prone to weight pain more heavily than pleasure because our range of pain is so much wider than our range of bliss. Unless we’re under the influence of chemical substances, the human pain-pleasure range is probably more like -100 to 10, not -10 to 10. So reducing pain feels more pressing. Even if this weren’t the case, negativity bias might produce the same kind of preference for pain reduction.

Some thought experiments:

  • We might imagine a world where all life is genetically programmed to experience only gradients of bliss, a la Pearce’s “paradise engineering.” An entire planet teems with blissful life in varied forms, yet one beetle is born with mutant genes; it suffers intensely from physical pain, and will do so for the entirety of its 4-month lifespan. Has our project of life failed?
  • If you could enhance your own mind with vastly more capabilities and vastly more bliss – but with the same ratio of painful qualia you experience today – would you do so? If the ratio of pain you experience now were truly unbearable, we might expect you to opt out – but here you are, flawed and pained, reading this article.

Assuming pain and pleasure each have a “weight,” it seems reasonable to presume that maximizing pleasure would be just as viable a guiding tenet as reducing suffering. Both seem to “matter” in a utilitarian sense.
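To make the weighting dispute concrete, here is a minimal sketch (in Python, with invented utility numbers – nothing from the NU literature) of how a classical utilitarian sum, a “weak” NU weighting, and a “lexical” NU ordering can rank the same two worlds differently. The beetle world from the first thought experiment is roughly world A: enormous aggregate bliss plus one drop of pain.

```python
# Toy illustration of how pain/pleasure weightings change which world
# "wins." All utility numbers are invented for illustration only.

# Each world is a list of (pleasure, suffering) pairs, one per being.
world_a = [(10, 0), (10, 0), (10, 1)]   # abundant bliss, one drop of pain
world_b = [(1, 0), (1, 0), (1, 0)]      # modest bliss, zero pain

def classical_utility(world):
    """Classical utilitarianism: pleasure and pain weigh equally."""
    return sum(p - s for p, s in world)

def weighted_nu_utility(world, pain_weight=100):
    """'Weak' NU: suffering counts pain_weight times more than pleasure."""
    return sum(p - pain_weight * s for p, s in world)

def lexical_nu_key(world):
    """'Lexical' NU: minimize total suffering first; total pleasure
    only breaks ties. A higher tuple means a more preferred world."""
    return (-sum(s for _, s in world), sum(p for p, _ in world))

print("classical prefers:",
      "A" if classical_utility(world_a) > classical_utility(world_b) else "B")
print("weighted NU prefers:",
      "A" if weighted_nu_utility(world_a) > weighted_nu_utility(world_b) else "B")
print("lexical NU prefers:",
      "A" if lexical_nu_key(world_a) > lexical_nu_key(world_b) else "B")
```

Under equal weights, the single drop of pain barely registers and world A wins; under a heavy or lexical weighting, that one drop dominates everything else and world B wins. That asymmetry is precisely what the thought experiments above are probing.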

Reason 2: NU Stunts the Progress of Intelligence

Avoiding one being in “hell” – or avoiding one drop of negative qualia – would mean accepting vastly less potential (and vastly less speed) for the expansion of sentience itself. We might imagine two gigantic, planet-sized AGIs in a given galaxy.

  • One aims at proliferating more intelligence and relatively more bliss (roughly, scenario 8).
  • The other aims only to snuff out the last vestiges of suffering in any form.

We might rightly wonder whether the second guiding principle could even arrive at AGI in the first place (I suspect it couldn’t). Playing defense alone obviously leaves us at a local maximum: a potentially blissful world, but one woefully fettered and limited.

From a policy perspective, “do no harm” is important. Legally, I don’t want someone to be able to steal my laptop even if they give me something more valuable – or slap me across the face even if they then pay for a few nice dinners. Simply barring others from doing harm seems right. This is partially because harms are easier to subjectively grasp (find me a utopia that wouldn’t seem like a dystopia to most people) – but it’s also a simple way to order a society. In this regard, I agree with Emilsson that the “‘ought not to be’ aspect of experience is more real than the ‘ought to be’ aspect of it.”

But here we’re talking about directing the trajectory of intelligence itself – not ordering a just society. From the perspective of the universe, “do no harm” doesn’t make as much sense as a guiding light.

Some people argue (rather compellingly) that the world as it is includes vastly more suffering than happiness. That same world has bubbled up dinosaurs, flowers, rodents, man.

The “spires of form” have all come up through this pain machine of nature. That pain machine bubbled up to you and me, sitting here and discussing utilitarianism and strong artificial intelligence.

Would you prefer to have stayed an amoeba or a slime mold? If we had not evolved – if the bumbling process of nature’s fecundity hadn’t thrown us here – we wouldn’t even have the opportunity to discuss morality. Do we suspect that we – homo sapiens – have achieved the highest possible understanding of the good with our feeble little idea of utilitarianism… or might there be vastly more important goods that higher minds could conceive of – just as we conceived of vastly higher goods than the slime molds, lizards, and rodents before us?

This brings us to reason #3.

Reason 3: NU Presumes That Qualia is What Matters Most

It seems obvious that qualia is extremely morally relevant. I’ve dedicated entire TEDx talks to convincing crowds that qualia – in all its higher and lower forms – deserves our attention as a firm anchor in our moral decision-making about technology and humanity’s future.

But is qualia really what matters most?

I suspect it isn’t. I suspect it’s merely an aspect of nature and life that we homo sapiens have access to (through the senses) and a label for (through reason).

Any time I can lean on the Concord Sage to clothe my thoughts in words, I do so – as he never fails to frame ideas more clearly and eloquently than I:

“But in all unbalanced minds, the classification is idolized, passes for the end, and not for a speedily exhaustible means, so that the walls of the system blend to their eye in the remote horizon with the walls of the universe; the luminaries of heaven seem to them hung on the arch their master built. They cannot imagine how you aliens have any right to see,–how you can see; ‘It must be somehow that you stole the light from us.’

They do not yet perceive, that light, unsystematic, indomitable, will break into any cabin, even into theirs. Let them chirp awhile and call it their own. If they are honest and do well, presently their neat new pinfold will be too strait and low, will crack, will lean, will rot and vanish, and the immortal light, all young and joyful, million-orbed, million-colored, will beam over the universe as on the first morning.” – Emerson, Self-Reliance

The infinite range of possible ways to value and act is obviously outside of our grasp – and the “stuff” of the universe is almost entirely unknown to us.

There are almost certainly higher goods that we cannot conceive of – just as rodents and slime molds couldn’t possibly conceive of utilitarianism.

It’s myopic to focus only on the goods we can imagine. It’s ridiculous to presume that Jeremy Bentham – or the Jains before him – rightly picked up on positive and negative qualia as the ultimate, eternal element of moral relevance. It seems just as likely that some beetle in 3,000,000 BC would have predicted democracy or Marxism, or discovered calculus.

Conclusion, and the Author’s Opinion

If I didn’t know so many negative utilitarians whom I like as people, I’d call it a position of literal cowardice – even vice. Denying future intelligences a wide berth to come forth – a vast opportunity to bloom into things more important than we can presently design – seems to literally stunt intelligence and sentience itself. Only an end to all life (an aim that few NUs actually advocate for) or the creation of literal hell could be worse.

I don’t presume to know the highest good that humanity – or post-humanity – should strive for. That’s wise cricket stuff, which I aim to avoid. I don’t presume to know much at all.

In fact, I presume so little, that I presume that identifying the end-game target for post-human life is downright wrong, because we don’t know enough to point to any kind of “ultimate” good by which we can guide the trajectory of intelligence.

Afterthoughts

The comments below were made on one of my Facebook posts by the philosopher David Pearce. In that post I stated – in a harsher tone than I’d usually like to use:

If I didn’t know and respect so many negative utilitarians, I would openly call it a vice, and a position of childish, seething cowardice.

I interviewed David back in 2013, and consider him a friend and a significant influence on my thought, evidenced by the fact that I reference his work consistently on this blog. Not a month goes by when I don’t recommend others read his work on sentience and transhumanism.

Our views about the long-term trajectory of intelligence differ in major ways – and I find our respectful discourse useful. Not having the time to reply on Facebook, I wanted to reply to David’s questions below. I hope this serves to round out many of the ideas I expressed above.

(1) Do you have the same gut-reaction to a Buddhist ethic [i.e. that it is a position of cowardice]? Or is your response to a secular Buddhist ethic like NU shaped by the worry that we’d press the hypothetical OFF button? (cf. https://www.quora.com/What-does-David-Pearce-and-other-transhumanist-think-of-the-benevolent-world-exploder-or-the-red-button-thought-experiment)

At present, I don’t know enough about the “Buddhist ethic” (I suspect there are many of them) to make a firm judgment call there. As a general rule, I think that any set of moral tenets laid out by a single dead person is not the be-all and end-all of seeking the good.

Per the “off button” (by which you’re referring to the extermination of all life for the sake of preventing suffering), I share the opinion that you articulate in the Quora post above.

That said, with or without the “off button,” having the guiding light of the trajectory be a fear of suffering seems misguided. Surely let us not create hells – but if there is a mix of qualia involved – or even if it is aggregately more pleasure than pain – then let it persist, and bloom until we can find higher goods.

(2) Would you “walk away from Omelas”? (cf. https://www.utilitarianism.com/nu/omelas.pdf) Or keep on partying? To rule out status quo bias, imagine that a genie offered you the chance to create New Omelas – a fabulous transhuman civilisation based on superintelligent bliss – at the price of the perpetual torture of a single child. Would you accept the genie’s offer?

I believe that the lotus-eaters are never safe. Full stop. One either steps boldly into the fray of the state of nature – or one is commanded by those who do. I might not walk away from Omelas, but I’d sure as hell not take my hands off the wheel of influence, of work, of progress, of direction.

If they reprogrammed my brain to dumb and blissful subservience, maybe that wouldn’t be all that bad. But better would be to have that same bliss while having more capacity to do, act, influence, create (I go into this in more depth in yet another article that references your work generously).

As for the creation of Omelas via the torture of one child: Brother, as you well know, there are a trillion insects and a billion other animals breathing their last – right now – while being eaten alive, kicking and screaming. If one unfortunate child be the cost, you’d be a monster not to stop the pain machine and put the bantling on the altar.

(3) A CU genie offers me the chance of super-exponential growth in my happiness at the comparatively trivial price of the exponential growth in your suffering. As a NU, I’d decline. If I’m persuaded by the genie’s seductive arguments for CU, then I should accept. Indeed I’m morally bound to do so. Can you advise?

My man, if I am on that particular altar, then woe to me – but I couldn’t expect you to save me if the upside was that great.

(4) Some forms of suffering are so bad that anyone who experiences such horror would bring the world to an end to make it stop. Condemning NU is a case of shooting the messenger. Rather than trying to marginalise suffering-focused ethics, I wish the transhumanist / EA / rationalist community would devote its energies to tackling the root of all evil. Alas, the biology of suffering is not an intellectually “sexy” topic like e.g. an AI apocalypse.

I don’t condemn the idea that negative qualia should be avoided, but I condemn weighting suffering astronomically higher than pleasure. You yourself state that life is mostly worthwhile, despite the mix of both. I don’t relish pain, and I don’t believe it must exist in order for pleasure to exist.

Like you I say: “Pain should be abolished, we should have all its ‘upsides’ without any of the qualia downsides.”

Unlike you, I say “If eliminating suffering drastically slows the grand trajectory, then we set a threshold for the pain-pleasure balance we find acceptable, and press the hell onward.”

Unlike you, I say “Jeremy Bentham did not find the eternal good, he found one important good that we hominids have intellectual access to – but beings beyond us will find higher goods – and missing out on those would be as ridiculous as crickets setting the moral rules for mankind.”

But what are you and I doing here – arguing with reasons – or cloaking our temperaments in arguments?

The latter, I suspect. That’s about as good as we hominids can do.

References

  • https://www.hedweb.com/negutil.htm
  • http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/
  • https://reducing-suffering.org/three-types-of-negative-utilitarianism/
  • https://qualiacomputing.com/2018/10/10/thoughts-on-the-is-ought-problem-from-a-qualia-realist-point-of-view/

 

Header image credit: Curbed