Episode 5 of The Trajectory is with the Executive Director and Co-founder of the Center for AI Safety, Dan Hendrycks.
The interview is our fifth and final installment in The Trajectory’s first series, AGI Destinations, where we explore future scenarios of man and machine – which futures we should move towards, and how.
I hope you enjoy this conversation with Dan Hendrycks:
In this article, I’ll explore Hendrycks’ position on the Intelligence Trajectory Political Matrix (ITPM), and highlight some of the more interesting takeaways from the episode itself.
The entire AGI Destinations series of interviews hinges in part on discussions around the ITPM (full article here) – a tool for roughly mapping possible futures involving man, cyborgs, and AI. The ITPM isn’t intended to be a permanent label for a thinker, but rather a reference point for the future they’re currently aiming to move towards.
Hendrycks sits somewhere between controlled and collaborative. On the path to ascension, he falls on the more conservative side of the progression.
Dan was very clear that he doesn’t have a specific long-term aim for where the post-human trajectory should go – deciding that, he stated, should be a democratic and collaborative process.
For now, he merely believes that initially landing at the spot he identified on the Intelligence Trajectory Political Matrix (above) would most likely give humanity a moment of pause – ensuring that we move forward in a direction with more stakeholders and buy-in, rather than through a process in which a small handful of AGI lab leaders decide the fate of the rest of humanity.
Dan was reticent to commit to any particular end-state – whether an AGI that eternally babysits humanity, or an extreme blast-off event of post-human intelligence. Again, during the interview, Hendrycks seemed ardently concerned with reaching a checkpoint where the intelligence trajectory could be decided and influenced by dynamics other than an arms race.
Hendrycks paints a stark picture of AGI’s likely impact on humanity in his paper, bluntly titled Natural Selection Favors AIs over Humans. He envisions a future where these artificial intelligences don’t just coexist with humans – they actively expand their reach, potentially at our expense (in much the same way that humans pursue their own goals and survival at the expense of the species around us).
During our interview, Hendrycks also delves into the private sphere, exploring how AGI might reshape our most intimate connections. Picture a world where advanced chatbots become companions for the lonely, blurring the lines between artificial and genuine relationships. It’s a scenario that’s both fascinating and unsettling – one he believes might leave humanity in an increasingly weak relative position in the face of a more powerful species capable of manipulating human drives at scale.
Hendrycks believes international coordination is the most important outcome for humanity to achieve in the coming two years in order to avoid conflict with AGI, or conflict over AGI’s creation. He concedes that national AI governance in the USA might help kickstart such international coordination – and possibly encourage China to adhere to a handful of crucial standards as well.
In the nearer term, Hendrycks hopes that one of the major AGI labs will offer itself up to an international body to be governed by a broader democratic process – setting a moral and social standard, and helping ensure that this technology is developed more safely.
All this said, Hendrycks is not blindly optimistic that such coordination is possible, or that it would work out. He voices a fear, many times in the interview, of the US or Chinese military commandeering AGI technology and kicking off a more militaristic arms race – and he admits frankly that centralizing AGI research (in a kind of CERN for AI) might make such commandeering much easier.
Despite the obvious risks, he believes that a strong attempt at coordination is more likely to bring about beneficial outcomes for humanity, and for possible post-human intelligences, than today’s brute-force arms race.
…
I’m grateful to have had Hendrycks as the final guest in this series – and I hope dearly that I’ve done my job in asking the hard moral questions about posthuman directions. This is what The Trajectory is about.