Episode 3 of The Trajectory is with the CEO of SingularityNET, and AGI researcher for many decades, Ben Goertzel.
The interview is Episode 3 in The Trajectory’s first series AGI Destinations, where we explore future scenarios of man and machine – which futures we should move towards, and how.
Ben and I disagree about a lot of things – including the nature of man, and the likelihood of AGI being friendly to humanity – but I’ve followed his work actively and consider his thinking (including many of his ideas in the Cosmist Manifesto) to be prescient and important.
I hope you enjoy this conversation with Ben Goertzel:
In this article, I’ll explore Ben’s position on the Intelligence Trajectory Political Matrix (ITPM), and highlight some of the more interesting takeaways from the episode itself.
The entire AGI Destinations series of interviews hinges in part on discussions around the ITPM (full article here) – a tool for roughly mapping possible futures involving man, cyborgs, and AI. The ITPM isn’t intended to be a permanent label for a thinker, but rather a reference point for the future they’re currently aiming to move towards.
While Ben and I didn’t overtly pin down his position on the ITPM during our dialogue, he has long been clearly in the C3-ish camp. He believes that Kurzweil’s 2029 prediction for AGI is probably correct – and that the takeoff to superintelligence will likely occur shortly thereafter. He advocates for a laissez-faire approach to governance, and he has faith that open source AGI will become the preference for users, and a natural force in leveling the playing field against a handful of big tech leaders controlling AGI.
Ben believes that current governance bodies (including the United Nations) would likely do much more harm than good in restricting or binding AGI’s development. This belief seems pretty clearly bolstered by Ben’s optimism about AGI’s probable “friendliness” – a topic that he and I have long disagreed about.
Ben brings up an interesting point about “AI governance” appearing “good” and “virtuous,” while a stance of unimpeded development is framed in the public discourse as reckless. Expressing concern about AI can make you seem like a conscientious person, leading many to speak up.
Yet if you lean toward laissez-faire and prefer to focus on your own work without interference, you’re not likely to make a fuss. You’ll just quietly get on with building. As a result, the loud voices are overrepresented in regulation.
Many policy thinkers and innovators I’ve spoken to have considered it obvious that most people will oppose AGI or see it as a threat. Ben thinks that this is unlikely, and that people will mostly just care about what AGI can do for them. He suspects that most people won’t see it as an alien god. Rather, they’ll do what makes sense, what’s convenient, and what’s useful. In his view, there is no principled moral approach at play here.
Two of Ben’s quotes from the interview:
…
I’m grateful to have had Goertzel as episode 3 in this series – and I hope dearly that I’ve done my job in asking the hard moral questions about posthuman directions. This is what The Trajectory is about.