The “Good Monster” – The Entity We Might Have to Build

Defining the Term

A utility monster is Robert Nozick’s thought experiment: an entity that could be so happy as to outshine all possible happiness of all possible humans, such that humans (by utilitarian logic) would be obligated to serve this monster – and even “bow out” of existence to permit its bliss to expand.

A good monster is a broader concept, of which the utility monster is only one type. A good monster is a superintelligent and super-capable entity that optimizes for any combination of principles, tenets, or objectives. Think of it as an entity that would epitomize your own conception of “the good.”

My hypothesis is the following:

  • Nearly all good monsters would require the end of humanity’s reign as the dominant species, and may even imply the end of humanity.

Framed another way:

  • Nearly any definition of “the good” – as held by humans – would be better executed or exemplified by post-human entities than by humans themselves.

I posit that if one can articulate one’s own definition of the ultimate moral good, one could imagine a machine exemplifying that moral good to a vastly greater extent than humanity ever could. By extension, one’s own definition of the good (if it be anything other than “keep humanity happy and healthy” – which seems an unlikely ultimate moral tenet) would require one to bow out and allow a machine to exemplify that good.

Framed another way: [Any definition of “the good”] + [a sufficiently powerful superintelligence (should such a machine be created)] = [it is best if humans who want to do “good” bow out, and hand over the reins entirely to said machine]

Examples of Good Monsters

Let’s walk through a few examples to illustrate this point. Feel free to replace “the good” with any definition that you fancy. Here are some to start:

1. Positive Qualia. Say your highest idea of “the good” is to increase and optimize the overall amount of happiness in the world, and to decrease the level of suffering (classical utilitarianism).

This example produces the utility monster (one particular type of good monster). This entity would use all available matter to optimize the intensity and richness of its positive qualia, and it would optimize its ability to spread (to more planets, possibly more galaxies, possibly more dimensions). It might expand its mind (and so its capacity for bliss) and explore the galaxy replicating itself, or it might tile the universe with utilitronium.

If we consider the case where utility monsters have already entered existence, the complications arising from person-affecting principles drop away. From a simple consequentialist perspective, the upshot is then straightforward: we ought to transfer all resources to utility monsters, and let humanity perish if we are no longer instrumentally useful. (Bostrom, 2020)

2. Understanding and Truth. Say your highest idea of “the good” implies a world where nature is understood, and the laws and workings of the universe become accessible.

We can imagine a machine with vastly greater ability to understand nature (infinite memory, dozens of additional senses beyond the human ones [sight, sound, etc.], a vastly larger brain to connect the dots, and access to vastly more resources to orchestrate experiments and discern truth). A “good monster” in such a situation would be an AGI bent on understanding. Such an entity, if created, would potentially be vastly more morally valuable than the whole of humanity – presuming that it alone could explore and understand the depths of nature much better than a hodge-podge of fettered and warring hominids.

3. Diversity of Life. Say your highest idea of “the good” is the proliferation of diverse and varied life forms. A rather odd moral aim, seemingly handled already by the brute fecundity of nature herself – but let’s say that, nonetheless, you hold this proliferation of diverse and majestic forms of life to be the highest good.

It might seem that a single “good monster” would run counter to this aim of diversity – as a single, dominant entity (a singleton) is very different from an earth teeming with life. Yet we can imagine many situations where an AGI might exemplify this “good” of diversity much better than humans could.

First, we might imagine an AGI that optimizes for healthy earth ecosystems, teeming with new and varied forms of life. If sufficiently intelligent, it could likely care for the environment much better than we humans ever have. Its optimization of the diversity of earth-life could imply optimizing for the diversity of people – accelerating evolution into new strains and directions to create more permutations of not just oysters and crickets, but of hominids ourselves.

Second, we might imagine an AGI that uses all of earth’s meagre resources to launch off to a thousand new planets, populating them from scratch with entirely new kinds of sentient creatures, some spawned from earth-life, others created entirely anew, with new kinds of genetic material, new sizes, senses, and abilities – vastly beyond anything earth beings could ever imagine.

Third, we could imagine an AGI that uses all of earth’s meagre resources to create computer simulations of a billion planets, simulating the development of new life on each of them – and also recording everything knowable about earth-life and running trillions of diverse simulated experiments on earth to discover all the permutations of life that could occur there under various conditions. A sufficiently powerful simulation might allow these simulated entities to have conscious experiences of their own.

All three scenarios seem to epitomize (to greater or lesser degrees) the “diversity” good – and none of them seem to imply much specific value for humanity.

A being that could exemplify any of these definitions of “good” (not just the classical utilitarian one) doesn’t seem to require humanity – so long as the sufficiently intelligent entity could optimize for the defined “good.”*

Where This Leaves Humanity

I’m not here to posit whether it will be possible for such a superintelligence to be created, or when (though I have polled dozens of AI researchers about this timeline topic, which you can see here). What I am here to posit is that nearly any definition of “the good” could – and, if we are to believe it is the highest good, should – be executed by a superintelligent machine, not by humans. As machines transition from being servants to pursuing the good, it would seem inevitable that the reins would pass to them.

We’d have to define “the good” as somehow involving the preservation of humanity, but such definitions seem strained, even petty:

  • “The highest good is the proliferation of positive qualia – but only or mostly in humans – and the elimination or decrease of negative qualia – but only or mostly in humans.” This caveat rather obviously detracts from the main aim.
  • “The highest good is the total understanding of nature… and the preservation and happiness of one specific species of earth life above all others.” Again, this seems a blatantly selfish addition.

Arguments that maintain human relevance, or even human existence, require odd little selfish caveats – strained distractions from a bigger moral aim (our stated highest idea of “the good”) for the sake of our own lives and those of our friends and family. This is excusable – even natural. But that doesn’t mean we shouldn’t look squarely at the fact that, if an all-capable AGI could be created, our own definitions of the good – whether utilitarian or otherwise – would necessitate a hand-off of power to machines.**

Let me end by updating my hypothesis:

  • Nearly all good monsters would require the end of humanity’s reign as the dominant species, and may even imply the end of humanity – and probably rightly so.

This is not the writing of a misanthrope. This is the writing of someone who observes humanity as one fleeting instantiation of intelligence in a sea of change – a sea whose tide we should hope to see rise rather than fall – and maybe our legacy could be to help or encourage that rise.

Does this mean that I wish for the immediate destruction of humanity? Obviously not.

Does this mean that I’m eager to recklessly hurl forth the next instantiation of intelligence? Obviously not.

All it means is that I think it’s best that we talk about the relativity of moral value in the face of vastly more capable and cognitively rich future forms of life, and the massive consequences for our species of creating entities more morally valuable than ourselves.

 

*This is similar to the thought experiment I mention in my essay AI, Neuroscience, and Human Irrelevance, which I recommend reading as an extension of some of the ideas in this article.

**I actually argue that the highest good – and the best use of AGI – isn’t the optimization of a specific, human-conceived “good,” but an exploration of the good itself. This topic is explored further in an essay titled Where Should Humanity Steer Sentience? Examining 8 Potential Directions. The idea of the “good monster” applies just as well to this idea as to the arbitrary definitions of the good listed in the examples in this essay.