USMC Colonel Drew Cukor spent 25 years in uniform and helped spearhead early Department of Defense AI efforts, eventually leading projects including the Pentagon’s Project Maven. After government service, he has led AI initiatives in the private sector, first with JP Morgan and now with TWG Global.
Drew argues that when it comes to the US-China AGI race, the decisive lever isn’t what we block – it’s what we adopt. The nation that most completely fuses people and machines across daily life, industry, and government will set the tempo for everyone else.
In this episode, Drew lays out:
This is the third installment of our “US-China AGI Relations” series, where we explore pathways to achieve international AGI cooperation while avoiding conflicts and arms races.
I hope you enjoy this episode with Drew:
For Drew, the essence of the U.S.-China AI contest is not raw technical breakthroughs but adoption. He frames AI as a civilizational challenge: whichever nation integrates it most fully into daily life, institutions, and industry will emerge as the leader. In his view, adoption at scale reshapes culture, industry, and even military competitiveness – far more than isolated advances in laboratories.
He warns that while the U.S. has the DNA of AI leadership – the best universities, researchers, and compute – it is falling behind in all-in societal adoption. By contrast, China’s system enables rapid, sweeping integration across sectors. The outcome is not just faster financial services or better consumer experiences – it is the creation of an entirely new culture where humans and machines work symbiotically.
Drew emphasizes that this civilizational adoption gap poses a direct competitiveness challenge for the United States. It is not enough to focus solely on export controls, IP protection, or defensive strategies; if American businesses, schools, and government services fail to embrace AI, the U.S. risks watching its leadership drift eastward.
The worst-case scenario for Drew is not simply being out-innovated in labs, but being out-civilized – watching another nation integrate AI so completely into its institutions, economy, and culture that its influence seeps globally. In this world, U.S. banks, supermarkets, and industries would find themselves competing against far stronger AI-empowered counterparts, unable to keep pace.
He points to Jared Diamond’s Guns, Germs, and Steel – a book that traces how technologically advanced civilizations have historically overtaken slower ones. For Drew, AI adoption is today’s version of that dynamic. The nation that evolves fastest into a human-machine symbiosis will shape the future, while laggards risk being “infected” by the advances of others, forced into dependency rather than leadership.
The same risk extends to the military domain. Drew underscores that if one nation fully absorbs AI into logistics, planning, and operations, warfare itself will be reinvented. A military that is wholly AI-empowered would hold a decisive “offset” advantage, similar to the U.S. display of “shock and awe” in the Gulf War – but magnified by AI’s speed, precision, and integration.
For Drew, the worst case is clear: a future where another power builds a civilization more efficient, creative, and militarily capable through AI adoption, leaving the U.S. unable to maintain its global standing or defend its values.
The best-case scenario for Drew is the United States rekindling its bias for action and embracing adoption at scale. He envisions a society where AI saturates every layer of life – from classrooms to banks to city services – and where American strengths in innovation are matched by equal strengths in integration. In this world, speed, accuracy, and delight become the everyday expectation across all services, empowering the U.S. to remain competitive on its own terms.
Education, he argues, is one of the most important battlegrounds. Drew imagines curricula where AI is integrated into learning from the earliest years, so that every student grows up prepared for a lifetime of human-machine collaboration. Without this, he warns, an entire generation could end up “doing AI in the dark,” while other nations build cultural momentum from the ground up.
For Drew, the best-case scenario also requires a regulatory environment that supports adoption rather than stifles it. Instead of rules invented out of fear, he calls for standards that emerge from practice – frameworks grounded in experience, codified over time, and measured consistently across the industry.
Ultimately, Drew’s best-case scenario is not harmony between the U.S. and China, but an America that becomes fully symbiotic with AI, ensuring its own sovereignty and competitiveness by leading with adoption rather than lagging behind.
Drew sees one overriding imperative: the United States must shake off complacency and move from words to adoption.
For policymakers, he argues that regulation should emerge, not be invented. He points to the NIST risk management framework as the right model: best practices discovered in the field, codified only after use, then enforced consistently. What slows America down, he warns, are the “horse whisperers” inside enterprises who over-interpret regulators’ intent, paralyzing adoption through fear. Regulators, he insists, don’t want speculation – they want to see firms follow standards that grow out of practice.
For innovators, Drew calls for courage and risk-taking. He warns that today’s CEOs are likely presiding over the last generation of human-only organizations. Future enterprises, he predicts, will be hybrids in which the human-to-machine ratio runs 30-70 or even 40-60. Leaders who hesitate will leave their industries exposed. He points to his own city of Santa Monica, where cutting-edge driverless Waymo cars share the streets with businesses still running on outdated processes – a contrast he sees as emblematic of U.S. stagnation.
Finally, Drew stresses that innovators must break free from complacency. Too many executives, he warns, are waiting for a “magic moment” when AI is handed to them – a passive stance that cedes ground to faster-moving competitors abroad. Private tools already outpace enterprise systems, proving the capacity exists. What’s missing is will. For Drew, the future won’t reward caution. It will reward leaders bold enough to drive adoption before it’s forced on them.
…
I’m grateful to have had Drew bring his perspective to this series. His message is simple but urgent: adoption or irrelevance. He’s not arguing for new slogans, nor for defensive measures alone – he’s calling for America to embrace AI in its schools, its enterprises, and its policies, or risk being surpassed by a civilization that does.
As this series continues, I’ll be drawing out perspectives from leaders across the military, diplomatic, and private sectors. The aim isn’t to flatten the debate into a simple story – it’s to surface the full range of risks and opportunities in plain view. If we want a real shot at steering AGI toward safe and shared futures, that kind of unvarnished clarity is essential.
My dearest hope continues to be global solidarity around the two great questions, including robust dialogue with the many smart and well-intended people in Chinese and US tech and governmental leadership. Nothing about the situation is easy, but I hope that sunlight on the brutal incentives involved allows us to coordinate in order to get a good shake for humans, and to steward the flame of life itself forward.