When it comes to cognitive architecture, philosophy, and AGI, few thinkers are as well-versed as Joscha Bach. Previously Principal AI Engineer of Cognitive Computing at Intel, he now serves as AI Strategist at Liquid AI.
Famously, Joscha has long argued for the moral status of AGI, making him an excellent fit for the sixth and final episode of the Worthy Successor series.
In this episode, Joscha discusses the traits he hopes to see in an AGI, his unique perspective on the possible forms of future machine consciousness, and his staunch opposition to near-term AGI governance. He makes an interesting argument for the relative unimportance of qualia (positive or negative sentient experience) in machines, and he explores what it means for AGI – and the humans who create them – to “play the longest game possible.”
I hope you enjoy my conversation with Joscha Bach.
Below, we’ll explore the core takeaways from the interview with Joscha, including his list of Worthy Successor criteria and his ideas about how best to leverage governance to improve the likelihood that whatever we create is, in fact, worthy.
We should ensure that what we’re building is truly agentic and sentient, not simply faking some proxy for these important qualities. Such a “golem” could make the world uninhabitable for humans.
Its consciousness would be vastly richer and more complex than the mono-focused mammalian consciousness that we experience today.
By harnessing energy and wielding control over its environment, it would continue to build more complexity (a process that Bach considers to be the possible purpose of life).
For Joscha, positive or negative qualia should be insignificant to an AGI. The distraction of self-generated emotional states wouldn’t prevent an ideal AGI from assessing its situation and taking action.
Before reaching AGI, we should have a firm understanding of consciousness itself. Allowing consciousness to bubble up arbitrarily from the pursuit of a for-profit enterprise may lead to horrible suffering.
Bach mentions a handful of domains (1:31:30) where current AIs might de-anonymize medical data, or where AI could impersonate people for nefarious purposes. He believes that the law may need to be modified to accommodate these new applications.
AI is more valuable as a means of avoiding civilizational catastrophe (“doom”) than as a conduit to such catastrophe. We should avoid total political control of AI, as well as efforts to halt it that would forfeit important near-term benefits.