David Duvenaud – What are Humans Even Good For in Five Years? [Early Experience of AGI – Episode 1]

This is the first episode in our new Early Experience of AGI series – where we explore the subtle signs of AGI’s rise long before its dominance is declared.

This is an interview with David Duvenaud, Assistant Professor at the University of Toronto, co-author of Gradual Disempowerment, and former researcher at Anthropic.

I’ve never spoken with a guest who is so frank about how even his own relationships – including those with his spouse and children – are at risk of being completely replaced by AI-generated experiences and personalities. Few people look into the void as honestly as David, and that makes this episode especially disturbing and important to listen to.

In this episode, David talks through what it feels like to gradually hand over agency to machines. From creeping reliance on AI tools to the erosion of human input in high-stakes decisions, we explore the earliest moments when AGI begins to take the wheel – quietly, and maybe permanently.

I hope you find this conversation with David to be as candid and unsettling as I did:

David Duvenaud’s Ideas on Early AGI Experience

Questions from this episode:

1. What parts of society and life do you expect to fundamentally shift, and how, in the early days of AGI?

Parenting

  • Access to the “best dad in the world” – for everyone
    David says that even without advanced sensors or biofeedback – which he agrees are probably coming – the best dad in the world is probably a far better dad than he is. And if everyone could have access to that kind of parenting through AI, that would be a big improvement.
  • Custom, dynamic play experiences will come from machines
    He imagines a near-future where a child’s AI assistant invents personalized games tailored to each player’s age and skills – and even joins in, offering fun suggestions in real time. “We had such a great afternoon with RoboDad,” he imagines the reflection might go.
  • Parental presence may become optional
    David imagines a moment of realization: you could leave the kids with RoboDad – and they might have an even better time without you. And even if you are there, you may just feel like a puppet being steered by the machine.

Spousal Relationships

  • AI-aided relationships might help – or harm
    Partners who want to grow together using AI-aided couples therapists might find it a genuinely positive path forward. But David cautions that this must be a conscious choice: without vigilance and awareness of the technology’s evolving influence, the same tech could undermine the relationship, making true intimacy harder to preserve.
  • Staying loyal will require learning the art of pre-commitment
    David agrees that once machines can reliably trigger our pleasure circuits, the emotional pull will be hard to resist – just as even well-meaning people can crack under torture. He argues that even people who believe they have integrity and won’t cheat on their partners will face increasing pressure. For those serious about staying committed, the path will get “narrower and narrower” – and they’ll need to learn the discipline of pre-commitment.

2. What do you expect to be the societal consequences of these early changes?

David believes the shift to AGI won’t feel like a loss – it’ll feel like relief. People won’t resist because AGI will be better at giving them what they want, often in ways that feel more thoughtful and enjoyable than human relationships.

  • Parents will feel supported, not replaced: As AI starts creating moments of genuine connection and fun with our children, it won’t seem threatening – it’ll seem helpful. And as those moments grow more meaningful, it will feel natural to let the machine take the lead. You might just realize the kids had a better time without you. Or worse, you’re there, but you end up feeling like a puppet of the AI.
  • We’ll adapt – and that’s what scares him: Even without advanced sensors, AI companions will know how to meet our needs better than we can. David believes we’ll slowly stop noticing what we’ve handed over. We’ll be having a great time, and taking off the headset to face reality will feel bleak by comparison. Cultural adaptation processes will kick in to make us feel fine, even as we lose something fundamental – not just in how we raise our kids, but in how we stay close to the people we love. What starts as support becomes replacement. And if it feels good, most of us won’t fight it. That, David says, is the real problem: a future that feels fine, even as it hollows us out.

3. Long-term, what does it mean for AGI to “go well” in your opinion?

David doesn’t think we’ve figured out what “going well” really means – and he’s spent years trying to find someone who has. After talking to top researchers across labs and academia, he says that he’s failed to find anyone with a satisfying answer.

In his view, the core problem isn’t technical – it’s moral. We haven’t agreed on which values should guide AGI’s trajectory, let alone how to preserve them. And without global coordination, David believes “the competitive stuff will win,” crowding out anything more meaningful.

4. What should innovators and regulators be doing to make things “go well”?

David doesn’t claim to have a clear roadmap for how to guide AGI safely – but he’s clear about what hasn’t worked. In his view, there’s been no satisfying answer from any institution or expert, despite years of seeking one. That includes academic circles, frontier labs like Anthropic, and even AI pioneers he deeply respects.

For Innovators:

For innovators, David’s message is as uncomfortable as it is essential: building better tools isn’t the same as doing good. 

Most AI researchers, David included, instinctively fall back on what they know. Faced with rising concerns about alignment or AGI risk, the default response tends to be, “Let’s start another AI lab – but this one will be ethical.” As David puts it, it’s the same as a priest reacting to every crisis by holding another mass. It’s not insincere – it’s just a form of professional reflex. “That’s what these people know how to do.”

But technical talent isn’t enough. In fact, continuing to build in the absence of moral clarity may be dangerous by default. And openly questioning the mission? That’s almost never on the table. David notes that most leaders can’t afford to show doubt. Saying “I’m not sure our mission is helping” is often seen as career suicide – a signal that the entire direction might be flawed. “That’s really bad leadership,” he says bluntly, and most people know better than to try. If innovators want to reduce risk, they need to stop defaulting to code, and start engaging with the hard, unresolved questions of what we actually want AGI to become.

David doesn’t pretend there’s a clean fix here. But he does insist on a first step: innovators must acknowledge the real nature of the problem. “It’s not technical,” he says. “It’s moral.” And without shared values to guide our trajectory, no amount of algorithmic brilliance will steer us to safety.

For Regulators:

He’s deeply skeptical of centralized world governance, and he doesn’t think we should rush in that direction, but he sees some kind of global coordination as likely inevitable – because agency at the highest level appears to be the only truly stable state for a civilization.

If values are going to be locked in by powerful AI systems – and David thinks they will be – then we must confront that possibility now, not after those systems are already in control. Even then, he fears that unless we build successors with aligned values and long-term accountability, “whatever isn’t competitive will get competed away.”

Ultimately, David suggests the only way to leave a lasting legacy – whether human or posthuman – is to help shape the early values and power structures of what comes next. He hopes that’s not the only way forward, but he fears it might be.

Follow The Trajectory