Joining us in the second episode of our series AGI Governance on The Trajectory is Mike Brown, Partner at Shield Capital and former Director of the Defense Innovation Unit at the U.S. Department of Defense.
Mike previously served in government as a White House Presidential Innovation Fellow and at the DoD, where he accelerated the adoption of cutting-edge commercial technologies for national security. Before that, he was CEO of Symantec.
In this episode, Mike highlights the immediate need for international coordination around AGI – with a focus on how to keep the US and its allies strong – while also seeking alignment with China where possible. He lays out a case for rallying initial AI alignment efforts around human rights and common values among allied nations – as well as a proposed approach for bringing academics, business, and other perspectives into the governance process.
I hope you enjoy this conversation with Mike:
Below, I’ll summarize Mike’s main points from each of the four sections of our interview.
10 out of 10 (though he doesn’t believe AGI will arrive soon).
Mike doesn’t believe that AGI is going to happen soon – he believes it may be 10, 20, or even 30+ years away. Nonetheless, Mike emphasized the urgent need for both domestic and international frameworks to address current risks, such as AI-enabled threats, and to prepare for AGI’s transformative potential.
He highlighted the importance of collaboration, even with adversarial nations, to mitigate global challenges and ensure responsible development.
Mike argues that AGI governance and international coordination should address both the existential risks and the strategic race to lead in AI development.
Acknowledging China’s explicit ambition to dominate critical technologies by 2049, Mike emphasizes the importance of the U.S. and allies maintaining technological leadership to preserve shared values like freedom and human rights.
Governance should start with a coalition of aligned nations, leveraging their common values, while pursuing broader agreements on existential risks through minimal consensus with nations like China and Russia. The governance framework should be dynamic, evolve with technological progress, and balance innovation with safeguards against misuse.
Mike proposed a coalition-based approach to AGI governance, focusing on a starting group of allied nations, such as the Five Eyes, to uphold shared values. This coalition would include technologists, academics, and policymakers to ensure the framework evolves with technological advances and practical use cases.
While collaboration with adversaries like China and Russia could address existential threats through minimal agreements, Mike emphasized prioritizing meaningful progress with aligned countries over settling for the “least common denominator” on issues around individual freedoms, privacy, and human rights. He also noted the need to manage transformative technologies in ways that balance societal benefits and mitigate risks without overly centralizing control.
We didn’t get into this question in our interview – instead, I asked Mike whether he thinks it’s viable that the military will one day commandeer the AGI labs because they seem too powerful to be left in the hands of private companies (you can tune into the interview to get a sense of what Mike thinks about the commandeering scenario).