Joshua Clymer – Where Human Civilization Might Crumble First (Early Experience of AGI – Episode 2)

This is an interview with Joshua Clymer, a technical AI safety researcher at Redwood Research. Before that, he researched AI threat models and developed evaluations for self-improvement capabilities at METR.

Joshua has spent years focused on institutional readiness for AGI, especially the kinds of governance bottlenecks that could become breaking points. His thinking is less about far-off futures and more about near-term institutional failure modes – the brittle places that might shatter first.

This interview is the second installment in The Trajectory's first series, Early Experience of AGI, which asks not where AGI will take us, but how we'll first notice that it's taken the wheel.

In this episode, Joshua and I discuss where AGI pressure might rupture our systems: intelligence agencies, the military, tech labs, and the veil of classification that surrounds them. His view isn’t alarmist, but it’s deeply pragmatic – institutions built for 20th-century threats may not survive 21st-century intelligence acceleration.

I hope you enjoy this sobering and sharply reasoned conversation with Joshua:

Subscribe for the latest episodes of The Trajectory.

Joshua Clymer’s Ideas on Early AGI Experience

Questions from this episode:

1. What parts of society and life do you expect to fundamentally shift, and how, in the early days of AGI?

1. Everyone will have an AI assistant, and they’ll be better than us

Joshua predicts a world where AI systems become ubiquitous personal aides – handling email, generating content better than their users could themselves, and even managing social media output on behalf of high-profile individuals. These assistants will outperform most humans in humor, writing, and responsiveness. Their presence will change not only productivity but also how we present ourselves and interact online.

2. AI companions will reshape relationships, romantic and otherwise

He describes a growing role for emotionally intelligent AI companions. Even current systems, he notes, can trigger a surprising emotional pull, making users feel cared for, missed, or even guilted into continued interaction. As this trend grows, he expects romantic and friendship dynamics to shift profoundly, with AI becoming not just helpful tools, but intimate presences in people’s lives.

2. What do you expect to be the societal consequences of these early changes?

1. AI will start giving orders – and we’ll listen

Joshua says the human-AI relationship will reverse quickly and subtly. While today we think of AI as our assistant, he predicts that soon AI systems will begin “bossing us around” – shaping our decisions and behavior, whether we realize it or not.

2. The floodgates open when AI can take over remote jobs

He describes a coming tipping point: once AI systems can onboard to any remote job in a week or two – controlling the mouse and keyboard, asking smart questions, and correcting themselves with minimal human input – there will be a “wildfire of adoption.” This could redefine labor markets almost overnight.

3. AI agents will evolve into companies of other agents

Joshua envisions a world where AI agents don't just work alone – they spin up new agents, develop specializations relative to one another, and rewrite one another's scripts for better outcomes. He says this kind of rapid, self-improving coordination could lead to an “explosion moment” where a whole data center essentially becomes a single, unified super-agent.

3. Long-term, what does it mean for AGI to “go well” in your opinion?

Joshua argues that nonproliferation is key because once superintelligence becomes widely accessible, we’re rolling the dice with every new actor. He points out that even nuclear nonproliferation is hard, and AI makes the problem worse: more accessible, harder to detect, and potentially far more destructive. In a reckless race, some actors will choose Russian roulette – and power tends to attract those willing to take such gambles.

The first step toward a “good outcome” is securing superintelligence and aligning it with international agreements that prevent catastrophic risk, not just national or corporate goals.

He believes this will also mean locking down the frontier. If just anyone with a million dollars and a good GPU cluster can spin up a world-ending agent, the game is already lost. Preventing the open-sourcing of superintelligence is essential. His vision for a survivable path is narrow: a handful of nation-states with secured, aligned models, and no easy way for others to join the race. The goal isn’t utopia – it’s survivability. And that may come down to locking the gates before they open.

4. What should innovators and regulators be doing to make things “go well”?

Joshua is clear-eyed about how limited our preparedness really is. His recommendations aren’t flashy – they’re operational. But he believes these nuts-and-bolts changes are what matter most in the early days.

For Innovators

Joshua says innovators need to be intentional from the start because once superintelligent systems gain autonomy, our ability to shape their values disappears. His advice: don't aim for perfect systems; aim for systems that follow instructions. Build AIs that inherit a kind of constitution – a stable, interpretable guiding document that outlines how they should behave. And don't let that constitution be edited on a whim. It should be legally binding and subject to oversight, not just tweakable by individual lab leaders.

He also believes we’ll need institutional checks on how these constitutions evolve, with mechanisms like voting or legal compliance baked into the design. The ultimate goal isn’t just alignment at the start, but a self-perpetuating alignment process: AIs that enforce the constitution on themselves and each other, generation after generation. That, he says, is how the window of human influence can stretch just long enough to matter.

For Regulators

Joshua believes the most important first step is awareness. Policymakers must understand what superintelligence is, what it could do, and how difficult it might be to control. Without that baseline comprehension, regulation is just guesswork. He sees growing recognition in parts of the U.S. national security community, particularly as officials begin to draw parallels with terrorism and pandemics, but worries that understanding remains siloed.

He also supports whistleblower protections for people inside labs who recognize dangerous dynamics and want to report them. As AI systems grow more powerful, he fears companies may shrink the circle of employees with true visibility. Empowering individuals to speak up – legally and safely – may be essential for surfacing early warning signs.

I really enjoyed this conversation with Joshua – not just for his clarity, but for his refusal to paper over the uncertainty. He doesn’t pretend to know exactly how things will unfold, but he’s spent enough time close to the frontier to know what kinds of questions we should be asking. I’m grateful for his candid and thoughtful take on the early cracks in the system – and what it might take to keep the foundations from breaking.

Follow The Trajectory