AGI risk seems lower than it is because the people who know that AGI risk exists are almost all incentivized against talking about it openly.
Broadly, there are two groups of people not talking about AGI risk:
People who know AGI risk is real (e.g. AGI lab leaders)
People who know nothing about AGI risk (e.g. politicians, regular citizens)
To catalyze wide-scale discourse about AGI risk, I argue that the following two strategies are strong candidates:
[Bottom-Up] Get Citizens Concerned: Find ways to meld AGI risk narratives into legacy media and into more understandable political talking points that are already in the Overton window. This work is bound to be messy, and to ultimately soil the core of the message with left-right political gunk, but (barring an AI disaster) it’s likely the only way that AGI risk catches on with a large enough core base of citizens. As I explain in the video essay, the citizens are the lynchpin to getting everyone (those who don’t know, and those who know) to discuss AGI risk more frankly.
[Top-Down] Get a Losing AGI Lab Leader to “Flip”: Those closest to achieving AGI are not going to flip and start talking about AGI risk; the rewards are too great. But those who are losing the race (and don’t want to live to see their rivals achieve the final flex before them) might be able to feign virtue by claiming, “Now I see AGI is dangerous. I’m a concerned expert, and this needs to be regulated!” They can cloak themselves in pretended virtue while also preventing a rival from crossing the finish line first.
Here’s the full video essay:
If you come up with more accurate content for the cells below, or new categories of stakeholders that aren’t considered here, feel free to ping me on X.
Here’s the Why No One Talks About AGI Risk graphic that I feature in the video itself:
I didn’t have time for a longer write-up on this chart, but the video essay handles the major points well, and (I hope) makes the case for rallying effort around the “Top-Down” and “Bottom-Up” strategies for catalyzing AGI risk discourse more broadly.
Below is a transcript breakdown (thanks Otter) of my video essay.
Transcript Breakdown
Why No One Talks About AGI Risk
Daniel Faggella introduces the topic of why artificial general intelligence (AGI) risk is not widely discussed.
He outlines the two main groups: mainstream media, politicians, and regular citizens versus AGI lab leaders, big tech, and the tech bro sphere.
The focus is on why these groups do not discuss AGI risk and what it would take to change that.
Daniel emphasizes the importance of understanding the reasons behind the lack of discussion on AGI risk.
AGI Lab Leaders
AGI lab leaders face a choice between dying by their own AGI and dying by a rival’s AGI.
They risk being seen as evil if they openly discuss the risks of AGI.
The concept of the “AGI Mandate of Heaven” is introduced, where raw capability and perceived benevolence are crucial.
There is no international coordination to address AGI risks, making it a challenging issue to tackle.
AGI Lab Employees
Some AGI lab employees may never have faced significant challenges and thus underestimate the risks.
They face the same two choices as lab leaders: perishing by their own lab’s AGI or being overtaken by a rival’s.
Employment concerns prevent many from openly discussing AGI risks.
Daniel mentions that he has not interviewed anyone from DeepMind or OpenAI who has spoken frankly about AGI risk.
Big Tech Leaders
Big tech leaders, like Satya Nadella, must maintain a perception of benevolence to avoid being seen as evil.
Financial incentives prevent them from openly discussing AGI risks.
In particular, the need to sell enterprise solutions keeps them quiet, since frank talk of AGI risk could scare away potential buyers.
Daniel emphasizes that he is not blaming these leaders but explaining their moral self-interest.
Tech Bro Ecosystem
The tech bro ecosystem includes a mix of individuals who understand the risks and those who do not.
Governance is often seen as effeminate, preventing some from discussing AGI risks.
Some individuals retweet without understanding the implications, while others play a political game.
The desire to fit in on the internet prevents many from discussing AGI risks.
Mainstream Media and Politicians
Mainstream media and politicians do not discuss AGI risk because their readership and voting base do not care.
Regular citizens lack the ability to extrapolate trends, making it difficult for them to understand the risks.
Daniel emphasizes that ignorance is a significant reason why AGI risk is not discussed.
He suggests that a groundswell of public awareness could change this.
Strategies to Flip the Overton Window on AGI Risk
Daniel outlines two main strategies to bring AGI risk into the Overton window: a bottom-up approach and a top-down approach.
The bottom-up approach involves getting a small group of citizens and media platforms to care about AGI risk.
The top-down approach involves getting AGI lab leaders who are losing the race to speak out about the risks.
Daniel emphasizes the challenges of both approaches but believes they are worth pursuing.
Challenges of Flipping Media and Politicians
Media and politicians must be convinced to care about AGI risk, which is challenging.
The message may need to be bastardized to gain attention, which could dilute the core issue.
Daniel suggests that politicians and media may need to be convinced to care about AGI risk by appealing to their self-interest.
He believes that a groundswell of public awareness could help flip media and politicians.
The Role of Losing AGI Labs in Addressing Risk
Daniel suggests that AGI labs that are losing the race may be more likely to speak out about the risks.
These labs may see the benefits of international coordination to prevent their rivals from becoming too powerful.
He mentions Richard Sutton and Jeff Hawkins as potential candidates for this approach.
The strategy involves getting these labs to speak out before their rivals cross the finish line.
Conclusion and Next Steps
Daniel concludes by emphasizing the importance of addressing AGI risk.
He suggests that both bottom-up and top-down strategies are needed to bring AGI risk into the Overton window.
He invites others to share their ideas and strategies for addressing AGI risk.
Daniel plans to explore scenarios other than disaster in a future video.