Dileep George – Keep Strong AI as a Tool, Not a Successor [The Trajectory Series 1: AGI Destinations Series, Episode 4]

Episode 4 of The Trajectory features Google DeepMind researcher and former co-founder of Vicarious AI, Dileep George.

The interview is the fourth episode in a 5-part Trajectory series titled AGI Destinations, where we explore future scenarios of man and machine – which futures we should move towards, and how.

I interviewed Dileep about his work with Vicarious AI five or six years ago (podcast here). In this interview, we get to go a lot deeper into perspectives about the purpose and direction of artificial general intelligence.

I hope you enjoy this conversation with Dileep George:

In this article, I’ll explore Dileep’s position on the Intelligence Trajectory Political Matrix (ITPM), and highlight some of the more interesting takeaways from the episode itself.

Dileep George on the Intelligence Trajectory Political Matrix

The entire AGI Destinations series of interviews hinges in part on discussions around the ITPM (full article here) – a tool for roughly mapping possible futures involving man, cyborgs, and AI. The ITPM isn’t intended to be a permanent label for a thinker, but rather a reference point for the future they’re currently aiming to move towards.

Dileep George - Daniel Faggella

Dileep could be placed around C2. He recognizes the potential for collaboration and advancement in emerging technologies and artificial intelligence – but his aim is for these new AGI agents to serve the goals of humans, not to create a better intelligence that makes humans obsolete. He believes there will come a time when AGI provides a real boost to acceleration, but that due to nature’s constraints, it will not go on forever.

From the vantage point of our own experience as evolved hominids, AGI acceleration may feel like the functional equivalent of dividing by zero. But Dileep stresses that whether that acceleration is genuinely unbounded or simply a very high rate is actually quite a big difference.

Interview Takeaways

1 – The Moral Value of an Agent

Dileep George doesn’t believe that artificial general intelligence will have any kind of moral value, because it will have back-ups and will not be able to have children. This replicability and effective immortality mean, in Dileep’s opinion, that AGI lacks mortality as we understand it – and so it wouldn’t register within human morality at all.

Its experience will be very different from human experience, and Dileep suspects that while we might “marvel” at the varieties of new experiences it might have – we ought not feel any empathy, or associate such feelings with a morally relevant “sentience.”

(For the record, this is probably the greatest area of difference in opinion between myself and Dileep. I believe that should AGI become conscious and have wider potentia than human beings, it should very much be morally valued.)

2 – AGI Governance

When it comes to governance, Dileep places real value on an iterative approach – one where AI is able to go out into the world, explore, and find where it actually delivers value. Along that journey we can assess its impacts and decide where governance is warranted, as opposed to pre-emptively laying out governance too early.

Dileep draws an analogy to air travel: in the early years, people thought that zeppelins were going to be the most common mode of air transportation. If we had built all of the airline regulation rules around zeppelins, we would’ve done a very bad job of governing air travel.

LLMs might be a lot like zeppelins and not like whatever becomes the more impactful AI of the future – and Dileep believes that we should experiment carefully and consistently in order to feel out the edges of where governance belongs or doesn’t belong.

I’m grateful to have had Dileep as episode 4 in this series.

Follow the Trajectory