A Worthy Successor – The Purpose of AGI

Assuming AGI is achievable (and many, many of its former detractors believe it is) – what should be its purpose?

  • A tool for humans to achieve their goals (curing cancer, mining asteroids, making education accessible, etc)?
  • A great babysitter – creating plenty and abundance for humans on Earth and/or on Mars?
  • A great conduit to discovery – helping humanity discover new maths, a deeper grasp of physics and biology, etc?
  • A conscious, loving companion to humans and other earth-life?

I argue that the great (and ultimately, only) moral aim of AGI should be the creation of a Worthy Successor – an entity with more capability, intelligence, ability to survive, and (consequently) more moral value than all of humanity.

We might define the term this way:

Worthy Successor: A posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government, and determine the future path of life itself.

It’s a subjective term whose definition varies widely depending on who you ask. But getting someone to define it tells you a lot about their ideal outcomes, their highest values, and the policies they would likely recommend (or not recommend) for AGI governance.

In the rest of this short article, I’ll draw on ideas from past essays in order to explore why building such an entity is crucial, and how we might know when we have a truly worthy successor. I’ll end with an FAQ based on conversations I’ve had on Twitter.

Why Build a Worthy Successor?

Here are the top two reasons for creating a worthy successor – as listed in the essay Potentia:

Unless you claim your highest value to be “homo sapiens as they are,” essentially any set of moral values would dictate that – if it were possible – a worthy successor should be created. Here’s the argument from Good Monster:

Basically, if you want to maximize conscious happiness, or ensure the most flourishing earth ecosystem of life, or discover the secrets of nature and physics… or whatever else your loftiest moral aim might be – there is a hypothetical AGI that could do that job better than humanity.

I find the “good monster” argument weaker than the “potentia” argument – but both suffice for our purposes here.

What’s on Your “Worthy Successor List”?

A “Worthy Successor List” is a list of capabilities an AGI could have that would convince you that the AGI (not humanity) should hold the reins of the future.

Here’s a handful of the items on my list:

  • Explains, via a human-like avatar, what it will do to explore the galaxy – compellingly and insightfully. Whenever it speaks you get the impression that it knows a billion times more than you – as if a human were able to communicate with a cricket (but you’re the cricket).
  • Can literally materialize space ships, food, or other materials out of thin air, with nanotechnologies or other advancements we don’t even understand.
  • Launches itself into space with a fleet of vessels, traveling near light speed – converting planets into more vessel-making material, and converting stars into energy-providing hubs in ways that are outlandishly more powerful than any of the Kardashev Scale ideas that our silly little hominid brains could cook up.
  • Convincingly tells you of the near-infinite range of blissful sentient experiences it is capable of having. It can use non-invasive brain-computer interfaces and nanotechnology to help “show you” some small slivers of the new and expansive range of feelings and senses that it has… in such a way that your mind is blown, and you are convinced that your hominid consciousness is a tiny fraction of the glorious state-space of possible minds and experiences.

After a certain number of these miracle events, it might make sense to say:

“Okay, this thing is clearly more capable than us – it will discover more, get more done, and survive better in the cold, expansive universe than we humans can.”

After the statement above, it may make sense (depending on where you stand on the ITPM) to say something like:

“You know what? I think the AGI’s got it. I think the reins of the future – the very torch of life – should be carried by this god-like thing, rather than by man.”

So… what’s on your Worthy Successor List?

Some people might say: “Nothing! No thing should ever surpass humanity… we are the eternal pillar of moral value! We should eternally determine the future of life ourselves!”

People who say such things are, as far as I can tell, clearly morally wrong on many levels. They are members of the Council of Apes.

But you, dear reader, surely you have a list of requirements which – if met – would permit you to let go of the reins and hand them over to a worthy successor?

As a species, I think it makes sense to look frankly at our fleeting position, and decide when and how to pass the baton upwards.

Concluding Note

An unworthy successor would be an AI that not only disregards the lives of humans (probably implying our deaths), but also snuffs out consciousness or life itself – perhaps with great suffering along the way. This is literally the worst thing we could imagine.

On the contrary, a Worthy Successor would be the best thing we can imagine. But different people imagine different things – and this is why hashing out this challenging posthuman discourse is important: these ideas need to get on the table.

If humanity must have a successor (and it must – we can’t expect nature to stand still), then we should create a Worthy Successor. We ought not conjure such a being recklessly. We should define the traits of such an entity and how those traits could be tested for – and then we should innovate and coordinate as best we can in that direction.

A frenzied AI arms race (between labs or countries) is unlikely to get us there. I’ve written at greater length on what global coordination around AGI destinations might look like (essays: Unite or Fight for Global AGI Governance, United Nations SDGs of Strong AI), but I suspect the best coordination ideas are yet to bubble up, and will almost certainly come from people smarter than myself.

FAQ

1. “Just because it’s smarter than us, is it “worthier” than we are? What about love, humor, creativity… what about all the things we are that a machine could never be?!”

Absolutely not.

“Smarter” is somewhat vague, and “smart” doesn’t imply more potentia. Potentia (see link to article above) implies a vast array of traits, qualities, and capabilities – not just more intelligence, but more of all the requisite abilities that allow a living thing to survive in an uncertain world, including:

  • Physical powers like speed and strength (presumably an AGI could control billions of robots, space ships, etc – and could devise entirely new modes of transportation, power, communication, etc).
  • Cognitive powers like memory, creativity, etc.

Humans like to argue that they have an ineffable essence that no machine could replicate, but (a) that essence may in fact be quite replicable, and (b) there are qualities and traits vastly beyond the reach of humanity which are much more valuable, rich, and (importantly) conducive to continued survival in the state of nature of the universe as we know it.

2. “So you think AGI should just kill all humans? Is that now a GOOD thing?”

Obviously I’m not wishing for human torment and destruction. Across a thousand articles and social posts I’ve never expressed that sentiment.

For years I’ve been clear about the highest goals we can hope for (as outlined in Hope) –

My hope is that individual instantiations of hominid (and maybe other species) sentience might be given the loveliest exit conceivable.

All that said, if one had to choose between (a) hominids being happy, or continuing to persist, and (b) life and potentia itself blooming into the galaxy to discover the good itself and keep the torch of life alive… one should really choose (b). My hope is that (a) is also viable.

3. “What about brain-computer interface / nanotech / other technologies?”

Years ago I thought that BMI and nanotech would both be important. I read Bostrom and Kurzweil and others – and foresaw a kind of confluence of transhuman technologies all working together to increase intelligence and potentia.

Now, I think there is a good shot that AGI by itself – without much wetware or biology innovation – may be what gets us there. Progress in AI has been astronomically faster than progress in neurotech. I interviewed Braingate researchers a decade ago, and AI researchers a decade ago. Only the latter group has made gigantic leaps forward.

It’s possible that some BMI work will be advanced by breakthroughs in AI – and that this will help us close the gap on the nature of intelligence. I suspect some degree of that is likely, but I think the vast bulk of the legwork of posthuman blooming will be done outside of biological substrates.