Anders Sandberg – Blooming the Space of Intelligence and Value [Worthy Successor Series, Episode 3]

This week Anders Sandberg joins us on The Trajectory for episode 3 of the Worthy Successor series. Anders – famously with the Future of Humanity Institute at Oxford for nearly 20 years – has a PhD in Computational Neuroscience and now serves as a Researcher at the Mimir Center for Long-Term Futures Research.

In this episode, he shares his take on what it means to “explore and expand value” – and how humanity might calibrate AGI’s emergence to help ensure that such value is explored. Anders touches on the idea of moral value in ways few thinkers can – and this episode unpacks a lot of what might make artificial superintelligent life worth creating.

I hope you enjoy this conversation with Anders Sandberg:

Below, we’ll explore the core takeaways from the interview with Anders, including his list of Worthy Successor criteria and his ideas about how best to leverage governance to improve the likelihood that whatever we create is, in fact, worthy.

Anders Sandberg’s Worthy Successor Criteria

1. Life Should Bloom in Varied and Expansive Ways.

The state-space of possible life (intelligence, consciousness, powers) should be explored as much as possible, within a rich ecosystem “containing and generating all the kinds of value that can possibly exist.”

We should accept that most of this value is beyond human experience or imagination.

We should aim to nudge the trajectory of this blooming of AGI away from purely destructive or purely suffering-inducing pathways, if possible.

2. Infinite Expansiveness.

This ecosystem of future super-life should go on indefinitely or as long as possible.

Regulation / Innovation Considerations – Anders Sandberg

1. We shouldn’t bind AGI only to our goals.

An AGI bound solely to caring for us will almost certainly limit our own ability to create and explore new things – and will certainly limit its own goals (to serving those of humans).

We should innovate and regulate in ways that allow for our safety and participation, while letting life expand beyond merely serving us.

2. Ensure that AGI is open-ended and not optimizing for one thing.

One moral tradition, one understanding of the universe, or one “approach” is limiting. A being locked into a single approach might be more powerful than us, yet unable to continuously discover better ways to tackle problems.

3. We should deliberately explore coordination systems for man and machine.

There are many ways to organize and act – some of which permit more exploration, some of which require caution. We shouldn’t be enamored with one “way” of governing, but should feel out the space of coordination systems that suit our next steps toward post-humanity.

What do you think of this episode with Anders?

Drop your comments on the YouTube video and let me know.

Follow The Trajectory