Joscha Bach – Building an AGI to Play the Longest Games [Worthy Successor, Episode 6]

When it comes to cognitive architecture, philosophy, and AGI, few thinkers are as well-versed as Joscha Bach. Previously Principal AI Engineer of Cognitive Computing at Intel, today he serves as AI Strategist for Liquid AI.

Famously, Joscha has long argued for the moral status of AGI – making him an excellent fit for the sixth and final episode of the Worthy Successor series.

In this episode Joscha discusses the traits he hopes to see in an AGI, his unique perspective on the possible forms of future machine consciousness, and his staunch opposition to near-term AGI governance. He makes an interesting argument for the relative unimportance of qualia (positive or negative sentient experience) in machines, and he explores what it means for AGI – and the humans that create them – to “play the longest game possible.”

I hope you enjoy my conversation with Joscha Bach.

Below, we’ll explore the core takeaways from the interview with Joscha, including his list of Worthy Successor criteria and his ideas about how best to leverage governance to improve the likelihood that whatever we create is, in fact, worthy.

Joscha Bach’s Worthy Successor Criteria

1. It should be conscious.

We should ensure that what we’re building is truly agentic and sentient, and is not simply faking some proxy for these important qualities. Such a “golem” could make the world uninhabitable for humans.

Its consciousness would be vastly richer and more complex than the mono-focused mammalian consciousness that we experience today.

2. It should build complexity.

By harnessing energy and wielding control over its environment, it would continue to build more complexity (a process that Bach considers to be the possible purpose of life).

3. It should be unencumbered by happiness or suffering.

For Joscha, positive or negative qualia should be insignificant to an AGI. An ideal AGI would not let the distraction of self-generated emotional states prevent it from assessing its situation and taking action.

Regulation / Innovation Considerations

1. We should push for research to understand consciousness.

Before reaching AGI, we should have a firm understanding of consciousness itself. Allowing consciousness to bubble up arbitrarily from the pursuit of a for-profit enterprise may lead to horrible suffering.

2. Regulate new near-term uses of AI that violate existing laws.

Bach mentions a handful of domains (1:31:30) where current AIs might de-anonymize medical data, or where AI could impersonate people for nefarious reasons. He believes that existing law may need to be modified to address these new applications.

3. Avoid efforts to halt AI progress entirely.

Bach sees AI as more valuable for helping to avoid civilizational catastrophe (“doom”) than as a conduit to such catastrophe. We should avoid total political control of AI, or halting efforts that would forfeit important near-term benefits.

Follow The Trajectory