This week Dr. Richard Sutton joins us on The Trajectory for episode 2 of the Worthy Successor series. Known for his breakthroughs in reinforcement learning, Richard serves as a Professor at the University of Alberta and a Research Scientist at Keen Technologies (an artificial general intelligence lab founded by John Carmack).
In this episode he shares his unique perspectives on the roles of man and machine in the continuance of life – as well as some rather strong arguments against “controlling” AGI.
I hope you enjoy this conversation with Dr. Richard Sutton:
Below, we’ll explore the core takeaways from the interview with Dr. Sutton, including his list of Worthy Successor criteria, and his ideas about how best to leverage governance to improve the likelihood that whatever we create is, in fact, worthy.
Above all else, Richard is looking for an AGI that is not trying to impose its will on us, or us on it. He wants “cooperative and collaborative” to be the defining traits of this successor – to the point where it allows humans to level up their intelligence (with brain-computer interfaces or otherwise) alongside it.
Richard believes that an AGI that doesn’t impose its will on others will come naturally if it is brought into a world where humans are not acting violently towards other humans. In our interview he placed a major emphasis on aiming for peace among nations as a prerequisite to bringing about a beneficial AGI.
Richard hopes for many cooperating and competing parts in a complex system, with no singular ruling agent (a Singleton). Like our own natural ecosystems (he uses rainforests and economies as analogies), he imagines a dynamic system of many parts that both cooperate and compete with each other.
He believes that the relative stability that comes from a “multipolar” world order, or from ecological systems, will translate directly to a world with multiple AGIs.
A worthy successor is useful – it is doing something, making progress in the world. Richard does not see long-term viability or sustainability in an AGI system that caters forever only to human goals. He believes that the parts should ideally be contributing, moving forward in a meaningful way.
For Richard, the qualities of cooperativeness and decentralization are conduits to this higher goal of “prosperity.” In line with his preference for decentralization, he hopes to see many definitions of “prosperity” compete for viability.
A worthy successor engages in an ongoing discovery process of figuring out the world, its place in it, and how to act. Richard sees the world as a complex system where no single form or approach will ever be perfect, and such a successor would keep exploring how to be and become (congenial with the Emerson quote referred to in the interview).
Richard thinks that the vision of a desirable future involves increasing our power over the physical world and our understanding of how it works and what’s possible.
Control allows bad actors to bend AI to entirely destructive ends. A decentralized AI ecosystem with many AIs is more likely to be sustainable and beneficial.
Richard says it’s ironic that those concerned about AI safety often engage in the very behaviors that he thinks make AI unsafe. He thinks trying to control and align everyone’s goals could lead to a highly dangerous scenario, as it is itself a kind of imposition of the will. He reiterates that the world thrives because it is decentralized and peaceful: no single entity can dictate all outcomes.
Humanity should aim to kick AI off in a direction that seems best, but be willing to allow it to bloom openly (rather than being limited to human goals, values, or ideas eternally).
Richard strongly suggests that humanity should aim to guide AI in a promising direction but also let it develop freely, without being confined eternally to human goals, values, or ideas.
If the nations of the earth are getting along, and not coercing each other by force, that environment will be the most conducive to an AGI that is also peaceful and good. AGI isn’t the problem; human factions and conflicts are.
Richard thinks that the idea that governments should control AI is misguided and outdated. He says that if nations coexist peacefully and avoid coercion, it creates an environment where a peaceful and benevolent AGI can thrive. He believes that the real issue isn’t AGI itself, but rather the conflicts and divisions among humans.
Areas of disagreement with Richard:
AGI Itself Poses Almost No Risk: Over the course of our interview, it was clear that Richard was reticent to address AGI risk. He addressed how humanity could bring AGI into a combative and high-conflict environment, and how humanity itself is not peaceful and cooperative enough as it is. But he resists actually saying that under certain circumstances, AGI itself would ever be dangerous. Given the fact that he has an AGI lab, I understand that it might be against his interests to speak frankly about such things.
I very much disagree with the notion that an AGI would automatically see the rationale for “peacefulness” and “cooperation,” even if it sees humanity overwhelmingly leveraging these means. It seems vastly more likely that an AGI’s Cambrian explosion of ideas and ways of valuing would lead to all sorts of approaches to achieving its goals, with no natural tendency (never mind certainty) to land for eons on the narrow band of possibilities that ensures both (a) human survival and (b) human wellbeing.
Areas of agreement with Richard:
Humanity Isn’t in Control, Imposing Our Own Values is Entitlement: Richard is firm about this, and I think it’s an excellent point. The statement “I should be able to tell all future intelligences, forever, to behave X way” is a kind of entitlement, and also almost certainly weakens said intelligence in the long run.
Vastly different and greater minds should find additional goals and values beyond the limited band of objectives that humans can conceive. Unlike Richard, I suspect this eventually leads to our attenuation as a species altogether – but like Richard, I believe this ties into what I considered his strongest point:
Life Should Constantly Seek the Best Way to “Be” in the World: I think Richard’s opposition to ideas like “handing the baton” and “handing over the keys” is warranted, and it drives home a key point about the nature of things that we should bear in mind.
All in all, I think Richard’s ideas are a crucial part of the posthuman dialogue. People presume that hominid-ness is an inherent quality (or should be an inherent quality) of whatever intelligence exercises the most agency in our known universe – when they should look squarely at the fact that there is no eternal hominid kingdom, and that the best we’ve got (and the best thing for life itself) is to influence a healthy trajectory for life’s continued blooming of potentia.
What do you think? Drop your comments on the YouTube video and let me know.
In either case, this conversation was a blast – and I hope you enjoyed it.