Episode 2 of The Trajectory is with the Co-Founder of Skype, a prolific funder of AI safety research and member of the United Nations Secretary General’s AI Council – Jaan Tallinn.
My first interview with Jaan was 5 years ago on my AI in Business Podcast (here: Ensuring a Positive Posthuman Transition). At that time, AGI was far off in the distance. Today, many consider AGI to be a near-term possibility, and Jaan’s focus has shifted significantly towards risk mitigation since my last chat with him.
The interview is the second in The Trajectory’s first series, AGI Destinations, where we explore future scenarios of man and machine – which futures we should move towards, and how.
I hope you enjoy this conversation with Jaan Tallinn:
In this article, I’ll explore Jaan’s position on the Intelligence Trajectory Political Matrix (ITPM), and highlight some of the more interesting takeaways from the episode itself.
The entire AGI Destinations series of interviews hinges in part on discussions around the ITPM (full article here) – a tool for roughly mapping possible futures involving man, cyborgs, and AI. The ITPM isn’t intended to be a permanent label for a thinker, but rather a reference point for the future they’re currently aiming to move towards.
Jaan was very careful not to place himself in any specific cell – even after repeated reminders that placements are intentionally temporary, not permanent labels. He is, however, quite clear in advocating against ravenous AGI advancement, and in favor of slowing down development and open-source initiatives in order to define a kind of global governance structure before pushing ahead on AGI.
That said, my best tentative guess for Jaan’s position would be B2, based on the perspectives he shared in our episode. His opinions on risk and open source seem to map relatively well onto those of our first guest in the series, Dr. Yoshua Bengio.
Jaan Tallinn discusses Scott Aaronson’s “Faustian Parameter,” wherein he’d accept a 2% risk of existential catastrophe in exchange for a 98% chance of a glorious future for earth life.
Jaan is clear that he’d like to have a “better deal” than 2% risk of all humans being killed. He advocates for a pause on the AGI race, and an opportunity to more thoroughly explore pathways of development that might give us a higher chance of a glorious future (for humans and posthuman life), and an even lower probability of total annihilation.
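To make the stakes of that disagreement concrete, here is a back-of-the-envelope sketch (mine, not Jaan’s or Aaronson’s) of how the expected value of racing ahead shifts with the risk level – the utility numbers are purely hypothetical placeholders:

```python
# Illustrative only: u_glorious and u_doom are made-up utilities, since
# no one in the episode assigns numbers to these outcomes.
def expected_value(p_doom, u_glorious=1.0, u_doom=-100.0):
    """Expected utility of pushing ahead with a p_doom chance of catastrophe."""
    return (1 - p_doom) * u_glorious + p_doom * u_doom

print(expected_value(0.02))   # Aaronson's 2% deal: -1.02
print(expected_value(0.002))  # a "better deal" at 0.2%: 0.798
```

The point of the toy arithmetic: whether a 2% gamble looks acceptable depends almost entirely on how heavily you weight total annihilation – which is exactly why Jaan wants a slower, more principled process for setting that parameter.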
Jaan mentions (31:26 in the interview above) that a troublingly small number of companies (AGI labs) are taking risks with the future of all living things – and that we should have a more principled mechanism for deciding how much risk we take in pursuit of AGI.
As a potential aid to such a Veto Committee, Tallinn mentions MACI (Minimal Anti-Collusion Infrastructure) – an on-chain voting platform that protects privacy and minimizes the risk of collusion and bribery – as a way to make global governance more participatory and less abusable.
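For readers unfamiliar with MACI, the sketch below is a loose, self-contained illustration of its core anti-collusion trick – voters can silently rotate their keys, which makes a bought vote unverifiable and therefore worthless to a briber. This is not the real MACI protocol (which runs on Ethereum and uses zk-SNARK proofs); all names and structures here are hypothetical simplifications:

```python
from dataclasses import dataclass

@dataclass
class Voter:
    voter_id: int
    key: str  # stand-in for a real public key

@dataclass
class Message:
    voter_id: int
    signed_with: str            # key used to sign this message
    new_key: str | None = None  # optional key rotation
    vote: str | None = None     # optional vote payload

class Coordinator:
    """Processes messages privately; only the final tally is public."""
    def __init__(self, voters):
        self.keys = {v.voter_id: v.key for v in voters}
        self.votes = {}

    def process(self, messages):
        # Messages are processed in order; a check against the voter's
        # *current* key gates every action.
        for m in messages:
            if self.keys.get(m.voter_id) != m.signed_with:
                continue  # invalid: signed with a stale (e.g. sold) key
            if m.new_key is not None:
                self.keys[m.voter_id] = m.new_key  # key rotation
            if m.vote is not None:
                self.votes[m.voter_id] = m.vote    # latest valid vote wins

    def tally(self):
        counts = {}
        for v in self.votes.values():
            counts[v] = counts.get(v, 0) + 1
        return counts

# Scenario: a briber buys voter 1's original key and votes with it.
coord = Coordinator([Voter(1, "key-A"), Voter(2, "key-B")])
coord.process([
    # Voter 1 secretly rotates their key before the briber acts...
    Message(voter_id=1, signed_with="key-A", new_key="key-A2"),
    # ...so the briber's message, signed with the old key, is discarded.
    Message(voter_id=1, signed_with="key-A", vote="no pause"),
    # Voter 1's true vote, signed with the new key, is what counts.
    Message(voter_id=1, signed_with="key-A2", vote="pause AGI"),
    Message(voter_id=2, signed_with="key-B", vote="pause AGI"),
])
print(coord.tally())  # {'pause AGI': 2}
```

In the real MACI, the briber also cannot observe the coordinator’s processing, so they can never tell whether the key they bought was already stale – which is what makes bribery pointless.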
…
I’m grateful to have had Tallinn as episode 2 in this series – and I hope dearly that I’ve done my job in asking the hard moral questions about posthuman directions. This is what The Trajectory is about.