This is an interview with Joel Predd, a senior engineer at the RAND Corporation and co-author of RAND’s work on “five hard national security problems from AGI”. In this conversation, Joel lays out a sober frame for leaders: treat AGI as technically credible but deeply uncertain; assume it will be transformational if it arrives; and recognize that the pace of progress is outstripping our capacity for governance.
Joel argues trust is far too low for sweeping bargains between the United States and China today, but windows of opportunity may open after catalyzing events. The prudent move now is to buy time through limited controls and to prepare to seize those windows with practical contingency planning.
In this episode, Joel explains what AGI is and why leaders should treat it as credible, the worst-case scenarios a U.S.-China AGI competition could produce, and the preparedness steps policymakers and innovators should take now.
This is the fourth installment of our “US-China AGI Relations” series, where we explore pathways to achieve international AGI cooperation while avoiding conflicts and arms races.
I hope you enjoy this episode with Joel:
Joel begins by defining what’s at stake. For him, AGI means human-level or even superhuman cognition across a wide range of tasks – particularly the ones that matter economically and militarily. That’s the baseline idea leaders need to have in mind.
He stresses that such a capability should be treated as technically credible, even if the parameters, timelines, or paradigm in which it arrives are deeply uncertain. Leaders should hold both ideas together: credible enough to prepare for, uncertain enough not to anchor on a single forecast.
If AGI arrives, Joel says, it won’t be just another technology. It will be utterly transformational – “more like a different species than another app.” That means immense opportunities, unprecedented risks, and one hard truth for both the U.S. and China: there is no plan today, and the pace of progress is already outstripping our capacity for governance.
Joel warns of several worst-case scenarios, drawing on the framework from the paper he co-authored with Jim Mitre at RAND – “Five Hard National Security Problems from AGI”. He says those problems help capture the kinds of dangers that could emerge in a U.S.-China AGI competition.
Joel’s first concern is AGI empowering non-state actors. He warns that advanced systems could act as a “malicious mentor,” handing dangerous capabilities to people who would otherwise never be able to build weapons of mass destruction. This, he stresses, is a risk that neither the U.S. nor China has any interest in seeing realized.
He then points to the danger of AGI in state hands. Acting as an “innovation agent,” it could generate weapons no one has yet imagined – cyber, biological, or material breakthroughs that upend the military balance almost overnight, much like the splitting of the atom. Beyond that, Joel highlights the possibility of systemic shifts in power: AGI accelerating economic, scientific, and military strength for one side while leaving the other to languish.
Finally, Joel underscores instability on the path itself. Fear of losing ground could drive the U.S. or China to act first, even to the point of preventive war. Making matters worse, attribution could blur in a crisis: if critical infrastructure is hit, leaders might not know whether it was a hostile state attack or an AI system spinning out of control. That uncertainty, Joel cautions, is destabilizing in its own right.
For Joel, the best outcome is about preserving decision space. The danger, he says, is locking into a future that leaves no room to adapt. His somewhat positive vision is simple: keep options open and continue to buy time.
He links this vision to preparedness. The pace of progress is, in his words, “stunning,” while governments are slow to anticipate change. Joel notes that strategies will mostly emerge in response to events, which is why his team at RAND is working on simulations of plausible scenarios – to anticipate what choices leaders might face, what information they would need, and what actions should be ready in advance.
At the same time, he describes a second line of work: thinking strategically about the futures we should avoid and the futures we might want to pursue. Those goals, Joel says, will look different to different actors, including the U.S. and China. He admits that talking about positive futures can sound “naive,” but insists it’s still necessary. For him, the value lies not in predicting a single outcome, but in forcing policymakers to grapple with different possibilities – to sketch what a tolerable future might look like, however provisional, and to measure present choices against that backdrop.
Joel sees one overriding requirement: the United States must get ahead of AGI through preparedness, not paralysis.
For policymakers, he lays out several “no regret” steps. The first is basic situational awareness: knowing where the models are, where the compute is, and what the frontier systems are actually capable of. Entering the next crisis without that visibility would be dangerous. Second, he argues for a more dynamic relationship between the U.S. government and frontier labs – one that can evolve from simple awareness to deeper cooperation around security and safety as circumstances change. “It’s unwise to imagine one relationship to rule them all,” Joel notes. The third priority is adoption. Beyond evaluations, he insists that models must be put in the hands of people who will actually use them – in health care, cyber, defense – so policymakers understand their tactical implications, not just their benchmark scores. Finally, he points to contingency planning. Policymakers need a dedicated function for running scenarios, gaming crises, and ensuring options exist before events force bad choices.
For innovators, Joel notes that innovation should not be seen as limited to the frontier labs. The broader business community and civil society, he argues, have an “enormous role” in both capturing opportunities and hardening society against risks. The partnership between government and labs may get the most attention, but he believes we are only at the “early days” of imagining the higher-level public-private alignment that will ultimately be required. Industry leaders should not wait for government to act. Their choices in adoption, security, and experimentation will shape how resilient the economy – and society – will be in the transition.
Finally, Joel underscores that the relationship between the government and the private sector cannot be static. Objectives will change: sometimes the priority is securing the frontier against theft, sometimes it is ensuring safe deployment, sometimes it is accelerating progress. What matters is creating a structure that can adapt as circumstances shift. In his view, foresight and flexibility are the only safeguards against surprise.
What stood out to me in this conversation with Joel was how consistently his answers circled back to the same guiding theme: buy time, stay flexible, and prepare before events force our hand. There are no silver bullets for AGI, but there are no excuses for paralysis either. If policymakers, innovators, and civil society can build the foresight and adaptability Joel calls for, we may yet navigate the turbulence ahead with some room to choose our future.
…
As this series continues, my aim isn’t to offer polished answers or easy narratives. It’s to surface the full range of risks, incentives, and fragile possibilities in plain view – drawing out perspectives from leaders in defense, diplomacy, and industry, and putting them on the table for serious reflection. Only by confronting the complexity head-on do we stand a chance of finding shared ground.
My dearest hope continues to be for solidarity globally around what I’ve called the two great questions – questions that will shape whether advanced intelligence is a curse or a catalyst. None of this is easy, but if we can shine enough light on the hard incentives driving U.S.–China AGI relations, we may give ourselves a chance to coordinate, to hold on to what matters, and to steward the flame of life itself forward.