Jaan Tallinn – The Case for a Pause Before We Birth AGI [The Trajectory Series 1: AGI Destinations, Episode 2]

Episode 2 of The Trajectory is with Jaan Tallinn: co-founder of Skype, a prolific funder of AI safety research, and a member of the United Nations Secretary-General's AI Council.

My first interview with Jaan was five years ago on my AI in Business Podcast (here: Ensuring a Positive Posthuman Transition); at that time, AGI seemed far off in the distance. Today, many consider AGI a near-term possibility, and Jaan's focus has shifted significantly toward risk mitigation since our last conversation.

This interview is the second in The Trajectory's first series, AGI Destinations, where we explore future scenarios of man and machine: which futures we should move towards, and how.

I hope you enjoy this conversation with Jaan Tallinn: 

In this article, I’ll explore Jaan’s position on the Intelligence Trajectory Political Matrix (ITPM), and highlight some of the more interesting takeaways from the episode itself.

Jaan Tallinn on the Intelligence Trajectory Political Matrix

The entire AGI Destinations series of interviews hinges in part on discussions around the ITPM (full article here), a tool for roughly mapping possible futures involving man, cyborgs, and AI. The ITPM isn't intended to be a permanent label for a thinker, but rather a reference point for the future they're currently aiming to move towards.

Jaan was very careful not to place himself in any specific cell, even with repeated reminders that placements are intentionally temporary rather than permanent labels. He is, however, clear in advocating against a ravenous push toward AGI, and in favor of slowing down development and open-source initiatives in order to define some kind of global governance structure before pushing ahead on AGI.

That said, my best tentative guess for Jaan's position would be B2, based on the perspectives he shared in our episode. His opinions on risk and open source seem to map relatively well onto those of our first guest in the series, Dr. Yoshua Bengio.

Interview Takeaways

1 – Balancing AGI’s Potential Upside with its Potential for Destruction

Jaan Tallinn discusses Scott Aaronson's "Faustian Parameter," under which Aaronson would accept a 2% risk of existential catastrophe in exchange for a 98% chance of a glorious future for Earth life.

Jaan is clear that he’d like to have a “better deal” than 2% risk of all humans being killed. He advocates for a pause on the AGI race, and an opportunity to more thoroughly explore pathways of development that might give us a higher chance of a glorious future (for humans and posthuman life), and an even lower probability of total annihilation.

2 – Ideas on the Veto Committee

Jaan notes (31:26 in the interview above) that an unfortunately small number of companies (AGI labs) are taking risks with the future of all living things, and that we should have a more principled mechanism for deciding how much risk we take in pursuit of AGI.

As one idea to aid such a Veto Committee, Tallinn mentions MACI (Minimal Anti-Collusion Infrastructure), an on-chain voting platform that protects voter privacy and minimizes the risk of collusion and bribery, as a potential way of making global governance more participatory and less abusable.

I'm grateful to have had Tallinn as the guest for episode 2 of this series, and I dearly hope I've done my job of asking the hard moral questions about posthuman directions. This is what The Trajectory is about.

Follow The Trajectory