This is an interview with Jack Shanahan, a retired three-star general and former Director of the Joint Artificial Intelligence Center (JAIC) within the US Department of Defense.
Surely, a defense leader like Jack would advocate for racing towards stronger AI as fast as possible to squash China as an adversary, right?
Wrong.
In this episode, Jack lays out his read on the US-China AI competition, the worst- and best-case scenarios he sees ahead, and what he would recommend to policymakers.
This is the first installment of our “US-China AGI Relations” series, where we explore pathways to achieving international AGI cooperation while avoiding conflict and arms races.
I hope you enjoy this episode with Jack:
(Listen to this episode on Apple Podcasts or on Spotify)
Jack Shanahan sees the US-China AI competition as an intense, accelerating dynamic where both countries are racing to maintain technological leadership. While he avoids calling it an “arms race,” he recognizes a deep strategic tension where each side assumes the worst about the other’s intentions.
The competition isn’t just about military power, but also economic strength, with both nations seeking to gain technological advantages. China has explicitly stated its goal to be the world leader in AI by 2030, which creates a pressure-filled environment of constant one-upmanship.
Importantly, Jack doesn’t believe in permanent technological superiority. Instead, he sees countries gaining temporary advantages that can quickly shift. He advocates for avoiding a zero-sum mindset, suggesting that while preparation and vigilance are crucial, dialogue and understanding are equally important.
His perspective is about maintaining a balanced approach: being ready to compete and defend national interests while simultaneously seeking opportunities for communication and potential cooperation. He wants to push back on the narrative that conflict is inevitable by keeping communication channels open and recognizing that technological progress isn’t a winner-take-all scenario.
Jack Shanahan warned of several worst-case scenarios, especially surprise tech breakthroughs and conflict triggers. His top concern was a “black swan” event where one side secretly develops advanced AI that the other can’t detect or counter—possibly hidden in a lab or data center, evading traditional intelligence.
One nightmare scenario involved a Taiwan conflict where unknown AI capabilities rapidly disable US responses—crippling infrastructure, disrupting logistics, launching cyber attacks, and causing mass casualties before defenses can react.
He also flagged the threat of AI-enhanced swarming drones, calling them a near-term risk. Cheap, intelligent drones could overwhelm defenses in unpredictable ways, with growing accessibility adding to the danger. Beyond specific threats, Jack feared an escalatory spiral between the US and China—each assuming the worst, racing to outpace the other, and drifting toward a war neither truly intends.
Jack Shanahan saw the best-case scenario as an “uncomfortable tension” where the US and China maintain communication and cooperate despite deep differences. This wouldn’t mean harmony, but a pragmatic dialogue focused on shared interests.
He proposed collaboration on global issues like climate change and pandemics, emphasizing that not all competition is zero-sum. Joint progress could benefit both nations and the world.
Jack stressed the need for ongoing dialogue, even amid high tensions. Both sides should prepare for conflict while working to prevent it through understanding and discussion.
His goal was to avoid an inevitable war mindset, encouraging a future where both nations use AI to raise global living standards while safeguarding national interests—a balance of readiness, communication, and long-term vision.
Jack Shanahan emphasized the critical need to excite and engage the American public about AI technology. He believes the country is currently pessimistic about AI, and policymakers must help people understand how these technologies can benefit them personally and economically.
He recommended holding town hall-style meetings and local conversations to explain AI’s potential, addressing people’s fears about job displacement and economic disruption. Jack stressed involving unions and having honest discussions about how jobs might change, not just promising unrealistic retraining scenarios.
On the policy side, Jack advocated for a balanced approach to regulation – moving rapidly but deliberately, with a “light touch” that prevents unchecked development while not stifling innovation. He suggested maintaining the AI Safety Institute and creating frameworks that allow collaboration between government and tech companies.
Internationally, Jack recommended maintaining open dialogue, particularly between the US and China. He proposed starting conversations about AI governance, focusing on areas of potential cooperation like addressing global challenges, while still preparing for potential competitive scenarios. The goal is to prevent a zero-sum mindset and create opportunities for mutual understanding.
Jack also emphasized the importance of long-term commitment. He wants policymakers to view AI development as a 25-year journey, maintaining continuity across different administrations and creating a national strategy that aligns government, industry, and public interests toward a shared technological future.
…
I’m grateful to have had Jack as episode 1 in this series, and I hope to get more perspective from both US and Chinese military and political thinkers on this important theme in the months ahead. It’s heartening to know that leaders with Jack’s defense background see dialogue and cooperation, not just an all-out race, as part of the path forward.