Episode 4 of The Trajectory is with Google DeepMind Researcher and former Co-founder of Vicarious AI, Dileep George.
The interview is the fourth episode in a 5-part Trajectory series titled AGI Destinations, where we explore future scenarios of man and machine – which futures we should move towards, and how.
I interviewed Dileep about his work with Vicarious AI five or six years ago (podcast here). In this interview, we get to go a lot deeper into perspectives about the purpose and direction of artificial general intelligence.
I hope you enjoy this conversation with Dileep George:
In this article, I’ll explore Dileep’s position on the Intelligence Trajectory Political Matrix (ITPM), and highlight some of the more interesting takeaways from the episode itself.
The entire AGI Destinations series of interviews hinges in part on discussions around the ITPM (full article here) – a tool for roughly mapping possible futures involving man, cyborgs, and AI. The ITPM isn’t intended to be a permanent label for a thinker, but rather a reference point for the future they’re currently aiming to move towards.
Dileep could be placed around C2. He recognizes the potential for collaboration and advancement in emerging technologies and artificial intelligence – but his aim is for these new AGI agents to serve the goals of humans, not to create a better intelligence that makes humans obsolete. He believes there will come a time when AGI provides a real boost to acceleration, but that, given nature’s constraints, this acceleration will not go on forever.
From the vantage point of our own experience as instantiated hominids, that boost may feel like the functional equivalent of dividing acceleration by zero – but whether AGI acceleration is straight-up infinite or proceeds at some finite rate is, in Dileep’s view, actually quite a big difference.
Dileep George doesn’t believe that artificial general intelligence will have any kind of moral value, because it will have back-ups and will not be able to have children. This replicability and effective immortality mean, in Dileep’s opinion, that AGI would lack mortality as we understand it – and so would not register within human morality.
Its experience will be very different from human experience, and Dileep suspects that while we might “marvel” at the varieties of new experiences it might have – we ought not feel any empathy, or associate such feelings with a morally relevant “sentience.”
(For the record, this is probably the greatest area of difference in opinion between myself and Dileep. I believe that should AGI become conscious and have wider potentia than human beings, it should very much be morally valued.)
When it comes to governance, Dileep places real value on an iterative approach – one where AI is able to go out into the world, explore, and find its fit, and where we determine which impacts warrant governance along that journey, as opposed to pre-emptively laying out governance too early.
Dileep makes an analogy to air travel: in the early years of flight, people thought that zeppelins were going to be the most common mode of air transportation. If we had built all of our aviation regulations around zeppelins, we would have done a very bad job of governing air travel.
LLMs might be a lot like zeppelins, and not like whatever becomes the more impactful AI of the future – and Dileep believes that we should experiment carefully and consistently in order to feel out the edges of where governance belongs and where it doesn’t.
…
I’m grateful to have had Dileep as episode 4 in this series.