This week Scott Aaronson joins us on The Trajectory for episode 4 of the Worthy Successor series.
Scott is a theoretical computer scientist and Schlumberger Centennial Chair of Computer Science at the University of Texas at Austin who recently completed a year-long stint as an AGI researcher with OpenAI. I was influenced to invite Scott to the program after seeing his TEDx talk The Problem with Human Specialness in the Age of AI, and after Jaan Tallinn recommended Scott as a thinker worth following.
In this episode Scott shares his perspective on why AGI should evolve from, rather than replace, our existing human values. He discusses the possibility of a kind of “moral bedrock” that humans may already have access to, and which AGIs might expand upon.
I hope you enjoy this conversation with Scott Aaronson:
Below, we’ll explore the core takeaways from the interview with Scott, including his list of Worthy Successor criteria, and his ideas about how best to leverage governance to improve the likelihood that whatever we create is, in fact, worthy.
Posthuman intelligences may be vast extensions of our human preferences, but such human values and preferences are still in some way present.
Their moral values have evolved from ours by some continuous process (rather than displacing said values entirely).
It would carry the flame of awareness / qualia on a new AGI torch.
We could scrub all training data of any mention of consciousness and see if the model is able to articulate what awareness is like, regardless.
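For concreteness, here is a minimal sketch of what the scrubbing step of such an experiment might look like. The term list, corpus format, and keyword-based filtering approach are my own illustrative assumptions, not anything Scott specifies, and a real attempt would need far more careful filtering than this.

```python
# Minimal sketch: drop any document that mentions consciousness-related
# vocabulary before it reaches the training set. The term list and the
# simple list-of-strings corpus are illustrative assumptions only.
import re
from typing import Iterable, Iterator

CONSCIOUSNESS_TERMS = [
    "conscious", "consciousness", "sentience", "sentient",
    "qualia", "subjective experience", "awareness", "phenomenal",
]
PATTERN = re.compile("|".join(re.escape(t) for t in CONSCIOUSNESS_TERMS), re.IGNORECASE)

def scrub_corpus(docs: Iterable[str]) -> Iterator[str]:
    """Yield only documents with no match for any consciousness-related term."""
    for doc in docs:
        if not PATTERN.search(doc):
            yield doc

# Toy usage: the second document mentions "qualia" and is filtered out.
corpus = [
    "The cat sat on the mat.",
    "Philosophers debate whether qualia can be measured.",
]
print(list(scrub_corpus(corpus)))  # -> ["The cat sat on the mat."]
```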
(Scott doesn’t mention any specific form of governance in this series, but he advocates that governance should be commensurate with the enormity of what’s being created – an event potentially more consequential than anything else on earth.)
Models trained beyond some determined level of FLOPs might require some kind of registration process with a governing body – where they might be tested across a variety of criteria to determine risk.
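For a rough sense of what such a compute threshold would mean in practice, here is a minimal sketch using the common ~6 × parameters × training-tokens approximation for training FLOPs. The 10^25 FLOP cutoff and the example model size are illustrative assumptions, not figures Scott proposes.

```python
# Illustrative sketch: estimate training compute and check it against a
# hypothetical registration threshold. The 6 * params * tokens rule of thumb
# is a standard rough approximation; the 1e25 FLOP cutoff is an assumption
# chosen only to mirror the kind of threshold regulators have discussed.
REGISTRATION_THRESHOLD_FLOPS = 1e25  # hypothetical cutoff

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

def requires_registration(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training run exceeds the hypothetical threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= REGISTRATION_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 2 trillion tokens
# comes out to roughly 8.4e23 FLOPs, below the 1e25 cutoff in this sketch.
print(requires_registration(70e9, 2e12))  # False
```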
…
I appreciated Scott’s firm emphasis on:
On the topic of morality and “values,” I concur with Scott that – especially initially – it would be important for AGI to branch off from our own human values, rather than boot up an entirely new set of them. Any kind of wholesale replacement risks losing not only things that are “uniquely human,” but also things that might be adaptive and useful for future life in general – a sentiment that Bostrom shared in his Worthy Successor interview here on The Trajectory.
Regarding Scott’s notion that intelligence may “cap out” at moral ideas similar to those of humans, I disagree completely: I suspect that a mind a billion times beyond our own, working on problems vastly beyond our own, housed in substrates and manifested in embodied forms vastly beyond our own, would likely have values wholly alien to us. I suspect that assuming AGI would, with any reasonable likelihood, converge on human values or human-friendly values is a dangerous idea.
All that said, Scott’s point about there being a kind of “moral bedrock” may have credence, and I suspect time will tell. Given Scott’s insistence on seeing things as they are, I’d guess that his perspectives will evolve as new experiments come in. As will mine.
What did you think of this episode with Scott?
Drop your comments on the YouTube video and let me know.