The Grand Trajectory of Intelligence and Sentience
The “Grand Trajectory” refers to the direction of the development of intelligence and sentience itself. If the following two hypotheses are true: The moral worth of an entity can be…
This week Anders Sandberg joins us on The Trajectory for episode 3 of the Worthy Successor series. Anders – famously with the Future of Humanity Institute at Oxford for nearly 20 years – has a PhD in Computational Neuroscience and now serves as a Researcher at the Mimir Center for Long-Term Futures Research.
In this episode, he shares his take on what it means to “explore and expand value” – and how humanity might calibrate AGI’s emergence to help ensure that such value is explored. Anders touches on the idea of moral value in ways few thinkers can – and this episode unpacks a lot of what might make artificial superintelligent life worth creating.
I hope you enjoy this conversation with Anders Sandberg:
Below, we’ll explore the core takeaways from the interview with Anders, including his list of Worthy Successor criteria and his ideas about how to best leverage governance to improve the likelihood that whatever we create is, in fact, worthy.
The state-space of possible life (intelligence, consciousness, powers) should be explored as fully as possible, in a rich ecosystem “containing and generating all the kinds of value that can possibly exist.”
We should accept that most of this value is beyond human experience or imagination.
We should aim to nudge the trajectory of this blooming of AGI life away from purely destructive or purely suffering-inducing pathways, if possible.
This ecosystem of future super-life should go on indefinitely or as long as possible.
An AGI bound to care for us will almost certainly limit our own ability to create and explore new things – and will certainly limit its own goals (to serve those of humans).
We should innovate and regulate in ways that allow for our safety and participation, while permitting life to expand beyond merely serving us.
One moral tradition, one understanding of the universe, or one “approach” is limiting. An entity locked into a single approach might be more powerful than us, but unable to continuously discover better ways to tackle problems.
There are many ways to organize and act – some of which permit more exploration, some of which require caution. We shouldn’t be enamored of one “way” of governing, but should feel out the space of coordinative systems that suit our next steps forward to post-humanity.
…
What do you think of this episode with Anders?
Drop your comments on the YouTube video and let me know.