This is an interview with Jeremie and Edouard Harris, Canadian researchers with backgrounds in AI governance and national security consulting, and co-founders of Gladstone AI.
You might expect that experts steeped in dialogue and governance would call for more cooperation between the U.S. and China on AGI.
Instead, their message is starker: assume espionage, expect coercion, and prepare for a future where the Chinese state is already embedded inside Western labs.
In this episode, Jeremie and Edouard lay out:
This is the second installment of our US-China AGI Relations series, where we explore pathways to achieving international AGI cooperation while avoiding conflicts and arms races.
I hope you enjoy this episode with Jeremie and Edouard:
Jeremie and Edouard see the US-China AGI conflict as a fundamental clash of strategies. From their conversations with national security officials, especially in the intelligence community, the view is consistent: the United States is racing to secure its technological lead, while China is pursuing a strategy of “full spectrum” competition against U.S. interests. In this environment, trusting that agreements would be honored is seen as naïve.
They point to China’s track record as the main reason for this skepticism. The South China Sea’s nine-dash line – a unilateral claim that directly contradicts international law – is one example. The crackdown in Hong Kong is another. China’s handling of the COVID-19 pandemic likewise reflected a disregard for global norms. Add to this a major nuclear build-up, with U.S. intelligence projecting that China could deploy 1,000 thermonuclear warheads by 2035. New silo fields are already visible by satellite, and large underground facilities outside Beijing suggest preparations for continuity of leadership in extreme scenarios.
Within this tense landscape, Jeremie and Edouard argue that Track II dialogues should still be attempted. As Edouard put it, “You never know when an attitude might shift”. But both emphasized that such engagement cannot substitute for strategy. In their view, the United States must assume China will continue to ignore or violate agreements – and plan accordingly.
Jeremie and Edouard warned of several worst-case scenarios, all centered on China accelerating its AGI development by exploiting Western vulnerabilities. The most dangerous outcome, they said, would be that the “Chinese dragon” already inside Western labs is allowed to grow – with espionage and insider threats eroding America’s margin for safe AGI development.
One scenario they described involves systematic attempts to exfiltrate model weights, already traced back to Chinese advanced persistent threats. Former senior U.S. officials they spoke with were unequivocal: China is targeting frontier labs at scale.
Another scenario concerns the vulnerability of Chinese nationals working in Western labs. Jeremie emphasized that employees are not spies by choice – many feel genuine loyalty to the countries where they live – but obligations such as mandatory check-ins with CCP handlers make them vulnerable to coercion.
A further scenario is China embedding backdoors in Western critical infrastructure. Campaigns like Volt Typhoon have already targeted U.S. power and water systems, providing options to disable equipment at scale. Public cases – such as the former Google engineer convicted of stealing AI trade secrets for Chinese firms – show how insider theft works in practice.
The result is a worst-case scenario that is not a distant future possibility but a present danger: espionage pipelines already operating, systemic coercion in play, and critical infrastructure exposed – with China positioned to accelerate its AGI development through Western assets.
Jeremie and Edouard saw the best-case scenario as narrow areas where “trust but verify” mechanisms could still play a role, even if broad cooperation is unrealistic.
One possible outcome is non-proliferation of compute. Since China has not yet met its domestic chip demand, it could, in theory, agree not to export high-end GPUs internationally. The U.S., in turn, could lead a regime to limit the risk of large-scale clusters falling into the hands of non-state actors. At present, this would cost China little while creating a baseline for control.
Another best-case outcome lies in the development of verification technologies. Edouard pointed to flexible hardware guarantees – known as FlexHEG – as a promising direction. The idea is to build a compute stack that can be verified through encryption and physical hardening. While it may not be possible to implement as originally envisioned, elements of the FlexHEG approach could still prove valuable for bilateral agreements or other security frameworks. Jeremie added that if safeguards like these could be carved out specifically for alignment – for example, add-ons that mitigate loss-of-control scenarios – then co-developing such tools might be one of the few areas where incentives align.
The best-case scenario, then, is not harmony but carving out specific, monitorable lanes where both sides have aligned incentives. Anything that resembles “trust but verify,” Jeremie emphasized, is worth pursuing – even if it only buys time.
Jeremie and Edouard say policymakers and innovators face two urgent priorities: securing the West’s own labs and slowing China’s development.
For policymakers, they stress that current background checks are dangerously insufficient. Investigations often ignore information published in non-English languages, leaving major blind spots. Coordination between federal, state, and private-sector data sources is also weak. According to elite security professionals they’ve spoken with, far more rigorous vetting is possible – tapping overlooked datasets and international records – but it hasn’t yet been implemented.
For innovators, they emphasize the need to fortify AI infrastructure itself. Jeremie warned that if Western labs simply “hit the gas” on research while China is siphoning off intellectual property, they only drag China forward faster. Fortification, therefore, has to come hand in hand with denial and disruption – beginning with securing the clusters themselves before looking at how to slow adversaries.
Finally, they argue that both policymakers and innovators must be prepared to deny, degrade, and disrupt China’s AI progress. Running faster alone is not enough – it only shortens the margin before uncontrollable systems arrive. Buying time requires both fortification and active measures to slow China’s ability to build and scale frontier models. As Edouard summarized, “If you want a shot at alignment, you need margin – and margin means time.”
…
I’m grateful to have had Jeremie and Edouard bring their candid perspective to this series. Their view isn’t rosy: espionage is rampant, trust is absent, and slowing China may be the only way to buy time. But it’s precisely because their outlook is stark that it matters.
As we move forward in this series, I’ll be speaking with more thinkers across military, diplomatic, and technical domains. The goal is not to paint a simple picture – it’s to confront the full spectrum of risks and possibilities head-on. If we’re serious about steering AGI toward safe and shared futures, nothing less will do.
My dearest hope continues to be global solidarity around the two great questions, including robust dialogue with the many smart and well-intended people in Chinese and US tech and governmental leadership. Nothing about the situation is easy, but I hope that sunlight on the brutal incentives involved allows us to coordinate in order to get a good shake for humans, and to steward the flame of life itself forward.