Ben Goertzel – Regulating AGI May Do More Harm Than Good [The Trajectory Series 1: AGI Destinations, Episode 3]

Episode 3 of The Trajectory is with the CEO of SingularityNET and longtime AGI researcher, Ben Goertzel.

The interview is Episode 3 in The Trajectory’s first series, AGI Destinations, where we explore future scenarios of man and machine – which futures we should move towards, and how.

Ben and I disagree about a lot of things – including the nature of man, and the likelihood of AGI being friendly to humanity – but I’ve followed his work actively and consider his thinking (including many of the ideas in his Cosmist Manifesto) to be prescient and important.

I hope you enjoy this conversation with Ben Goertzel: 

In this article, I’ll explore Ben’s position on the Intelligence Trajectory Political Matrix (ITPM), and highlight some of the more interesting takeaways from the episode itself.

Ben Goertzel on the Intelligence Trajectory Political Matrix

The entire AGI Destinations series of interviews hinges in part on discussions around the ITPM (full article here) – a tool for roughly mapping possible futures involving man, cyborgs, and AI. The ITPM isn’t intended to be a permanent label for a thinker, but rather a reference point for the future they’re currently aiming to move towards.

While Ben and I didn’t overtly pin down his position on the ITPM during our dialogue, he has long sat clearly in the C3-ish camp. He believes that Kurzweil’s 2029 prediction for AGI is probably correct – and that the takeoff to superintelligence will likely occur shortly thereafter. He advocates a laissez-faire approach to governance, and he has faith that open source AGI will become the preference for users – a natural force for leveling the playing field against a handful of big tech leaders controlling AGI.

Ben believes that current governance bodies (including the United Nations) would likely do much more harm than good in restricting or binding AGI’s development. This belief seems pretty clearly bolstered by Ben’s optimism around AGI’s probable “friendliness” – a topic he and I have long disagreed about.

Interview Takeaways

1 – Political Correctness and The Popularity of AI Regulation

Ben brings up an interesting point about “AI governance” appearing “good” and “virtuous,” while a stance of unimpeded development is framed in public discourse as reckless. Expressing concern about AI can make you seem like a conscientious person, which leads many to speak up.

Yet if you lean toward laissez-faire and prefer to focus on your own work without interference, you’re not likely to make a fuss – you’ll just quietly get on with building. As a result, the loud voices are overrepresented in the regulation debate.

2 – Most of The Populace Will Be Ambivalent About AGI

Many policy thinkers and innovators I’ve spoken to have considered it obvious that most people will oppose AGI or see it as a threat. Ben thinks this is unlikely – that people will mostly just care about what AGI can do for them. He suspects that most people won’t see it as an alien god; rather, they’ll do what makes sense, what’s convenient, and what’s useful, with no principled moral stance driving their choices.

Two of Ben’s quotes from the interview:

  • “How is the AGI’s mind designed (cognitive architecture)?” “Who owns and controls AGI?” “How is the AI being shaped?” – Ben identified these as the questions that determine whether people get to choose the direction of their own ascension or not.
  • “The roll-out of mobile phones and prescription medications is part of an argument that it will be available to every single person” – New technologies are initially unavailable to the broader public, but access tends to spread over time. Ben expects the same of AGI: if it’s well disposed to humanity, its benefits should become widespread.

I’m grateful to have had Goertzel as episode 3 in this series – and I dearly hope I’ve done my job of asking the hard moral questions about posthuman directions. This is what The Trajectory is about.

Follow The Trajectory