This is an interview with Brad Carson, who served as a U.S. Congressman and as Under Secretary of the Army, later as Acting Under Secretary of Defense for Personnel & Readiness, and now serves as President of Americans for Responsible Innovation (ARI). Brad brings a rare blend of experience from both the defense establishment and the policy world – experience that shapes his views on how the United States should approach the coming era of artificial general intelligence.
You might expect someone with deep roots in national security to see AGI through a purely competitive lens – a technological arms race to be won at any cost. Instead, Brad’s focus is on responsibility, understanding, and restraint: how to sustain moral clarity in a domain where the stakes will rival those of nuclear deterrence, and where fear itself could become a strategic liability.
In this episode, Brad lays out his worst-case and best-case scenarios for U.S.-China AI competition, along with his recommendations for policymakers and innovators.
This is the fifth installment of our US-China AGI Relations series – where we explore pathways to achieving international AGI cooperation while avoiding conflicts and arms races.
I hope you enjoy this episode with Brad:
Brad begins by explaining that he prefers to think of the United States’ relationship with China in terms of competition rather than a race. A race, he says, creates the wrong incentives and undermines safety.
He explains that leaders must not ignore what AGI actually means: policymakers, he says, need to clearly understand the concept of artificial general intelligence and its implications.
Brad notes that competition and dialogue are not mutually exclusive. Even while competing, he believes there should be active communication between both governments and the major AI labs.
Finally, he emphasizes that technology is not deterministic – humanity still decides what is built and why. He points to human cloning and germline editing as examples of restraint, reminding leaders that “we made the tiger” and can just as well tame it. These are political choices, he says, and wisdom demands cooperation with other nations to ensure AI advances serve human flourishing rather than escalate toward catastrophe.
Brad warns of several worst-case scenarios that could emerge from the way nations approach artificial intelligence. He begins by noting that one possible failure mode resembles the nuclear arms competition of the past. In his view, treating AI as a race could make conflict or cyber escalation between great powers more likely.
He also points to the space race as another cautionary example. The U.S. won that race by sending a man to the Moon, but the focus on a single achievement left the country lagging afterward. This history, he argues, shows how short-term goals can lead to long-term setbacks.
Brad goes on to say that today’s AI boom could fall into the same trap. He describes how heavy investment in data centers and chips is already driving much of the economy, but warns that a sudden shift in technology could cause major disruption.
He concludes that nations should be careful about racing toward goals without asking what they truly accomplish. Even the space race, he reflects, produced pride and progress but was followed by decades of stagnation. The same mistake, he implies, could be repeated with AI if the purpose of the race is not clearly understood.
Brad describes a best-case scenario defined by communication, realism, and cooperation between major powers. Even adversaries, he argues, should hold regular conversations – between labs, executives, and national leaders – to understand each other’s progress and prevent dangerous assumptions.
He emphasizes that talking early is the best protection against crisis. If discussions begin now – before AGI is fully realized – both sides can develop mutual understanding and avoid panic reactions later. He notes that even in a competition, dialogue between scientists and governments should continue.
He adds that realistic cooperation could take many forms. For example, he says nations should discuss AI’s use in nuclear command and control, missile defense, or cyber operations – all areas that could become dangerously unstable if left unregulated.
Brad also describes a realistic best-case outcome for the United States: maintaining a lead in frontier chips and models while using that advantage to strengthen its economy. He says that rather than chasing permanent dominance, America should use its current lead to rebuild manufacturing and technological infrastructure at home.
Brad emphasizes that policymakers need to deepen their understanding of China’s real intentions and avoid repeating Cold War mistakes. He argues that U.S. leaders often project motives without studying internal debates or bureaucratic constraints. To build smarter policy, he says, Washington must invest in real expertise – reading Chinese sources, speaking with Chinese academics, and learning how Chinese military and political institutions actually function.
He stresses that meaningful dialogue is the next critical step. For Brad, both bilateral talks and Track II dialogues are necessary to prevent misunderstanding as AI becomes central to national power. He suggests that leaders and military counterparts should begin regular conversations focused specifically on AI’s strategic uses, its risks, and how each side conceives of safety and deterrence.
For innovators, Brad underscores the importance of securing America’s technological advantage – particularly in chips. He believes maintaining a clear lead in compute is essential to national security and economic stability. U.S. companies, he says, should treat chips not as ordinary export goods but as strategic assets that require oversight.
I’m grateful to have had Brad as part of this series. His background in both Congress and the Department of Defense gives him a rare vantage point – one that bridges political leadership, military experience, and the emerging realities of AGI.
What stood out most was his insistence on understanding before reacting. At a time when “race” language dominates the conversation, Brad reminds us that awareness and dialogue are still our best tools for avoiding the worst outcomes. His view that competition and communication can coexist – even with a rival power – is a perspective worth amplifying.
As this series continues, I’ll be speaking with more voices from defense, diplomacy, and research – to map out not just the risks of this new technological era, but the slivers of cooperation that might still keep us from repeating history’s mistakes.
…
My dearest hope continues to be for global solidarity around what I’ve called the two great questions – questions that will shape whether advanced intelligence is a curse or a catalyst. None of this is easy, but if we can shine enough light on the hard incentives driving U.S.–China AGI relations, we may give ourselves a chance to coordinate, to hold on to what matters, and to steward the flame of life itself forward.