This is the first episode in our new Early Experience of AGI series – where we explore the subtle signs of AGI’s rise long before its dominance is declared.
This is an interview with David Duvenaud, Assistant Professor at the University of Toronto, co-author of Gradual Disempowerment, and former researcher at Anthropic.
I've never spoken with a guest who is so frank about how even his own relationships – including those with his spouse and children – are potentially at risk of being completely replaced by AI-generated experiences and personalities. Few people look into the void as honestly as David, and that makes this episode especially disturbing and important to listen to.
In this episode, David talks through what it feels like to gradually hand over agency to machines. From creeping reliance on AI tools to the erosion of human input in high-stakes decisions, we explore the earliest moments when AGI begins to take the wheel – quietly, and maybe permanently.
I hope you find this conversation with David to be as candid and unsettling as I did:
Questions from this episode:
David believes the shift to AGI won’t feel like a loss – it’ll feel like relief. People won’t resist because AGI will be better at giving them what they want, often in ways that feel more thoughtful and enjoyable than human relationships.
David doesn’t think we’ve figured out what “going well” really means – and he’s spent years trying to find someone who has. After talking to top researchers across labs and academia, he says that he’s failed to find anyone with a satisfying answer.
In his view, the core problem isn’t technical – it’s moral. We haven’t agreed on which values should guide AGI’s trajectory, let alone how to preserve them. And without global coordination, David believes “the competitive stuff will win,” crowding out anything more meaningful.
David doesn’t claim to have a clear roadmap for how to guide AGI safely – but he’s clear about what hasn’t worked. In his view, there’s been no satisfying answer from any institution or expert, despite years of seeking one. That includes academic circles, frontier labs like Anthropic, and even AI pioneers he deeply respects.
For innovators, David’s message is as uncomfortable as it is essential: building better tools isn’t the same as doing good.
Most AI researchers, David included, instinctively fall back on what they know. Faced with rising concerns about alignment or AGI risk, the default response tends to be, "Let's start another AI lab – but this one will be ethical." As David puts it, it's like a priest responding to every crisis by holding another Mass. It's not insincere – it's just professional reflex. "That's what these people know how to do."
But technical talent isn't enough. In fact, continuing to build in the absence of moral clarity may be dangerous by default. And openly questioning the mission? That's almost never on the table. David notes that most leaders can't afford to show doubt: saying "I'm not sure our mission is helping" is read as career suicide – a signal that the entire direction might be flawed. "That's really bad leadership," he says bluntly, and most people know better than to try. If innovators want to reduce risk, they need to stop defaulting to code and start engaging with the hard, unresolved questions of what we actually want AGI to become.
David doesn’t pretend there’s a clean fix here. But he does insist on a first step: innovators must acknowledge the real nature of the problem. “It’s not technical,” he says. “It’s moral.” And without shared values to guide our trajectory, no amount of algorithmic brilliance will steer us to safety.
He's deeply skeptical of centralized world governance and doesn't think we should rush in that direction, but he sees some kind of global coordination as likely inevitable, because agency at the highest level appears to be the only truly stable state for a civilization.
If values are going to be locked in by powerful AI systems – and David thinks they will be – then we must confront that possibility now, not after those systems are already in control. Even then, he fears that unless we build successors with aligned values and long-term accountability, “whatever isn’t competitive will get competed away.”
Ultimately, David suggests the only way to leave a lasting legacy – whether human or posthuman – is to help shape the early values and power structures of what comes next. He hopes that's not the only way forward, but he fears it might be.