I’m very much of the belief that a brute arms race is the wrong way to arrive at a positive AGI outcome (i.e. a Worthy Successor AGI, or the survival of humanity).
Civilization as we know it today is a rich mix of competition and cooperation, where different arenas allow our incentives to run amok (marrying who we like, divorcing who we like, believing in whatever gods we like, competing ardently in business and politics), while also binding them so that they don’t do more harm than good in a greater sense (laws against theft, murder, political bribery, or businesses polluting the environment).
Some degree of similar coordination, or arenas in an international sense, would be required to calibrate towards a net beneficial future with AGI (I don’t know exactly what this would look like, but I’ve given it some thought in other essays).
Some people have argued that it would take an AI-caused disaster to get humans to take AGI risk seriously, and come together as a species to determine what kind of future we want to collectively work towards.
But not all disasters would be survivable, and not all disasters would encourage intergovernmental coordination.
Types of AI Disasters
Along with historical and hypothetical examples, I’ll use a handful of simple graphics to explain the core difference between disasters that unite humanity, and disasters that accelerate conflict and arms races.
As we head in, let’s define a handful of terms:
AI / AGI disaster: An incident involving significant loss of life or economic damage facilitated by advanced AI. These incidents might range from minor (such as a cyberattack that shuts down electricity and disrupts hospitals and stock markets) to absolutely existential (AGI changing the composition of earth’s atmosphere and wiping out most earth-life). These disasters might be deliberate or accidental, and might occur due to the intention of humans or due to the self-determined goal of autonomous AI.
Dividing disaster: An AI disaster that encourages human groups (especially great powers) towards greater conflict and an AGI arms race.
Uniting disaster: An AI disaster that encourages human groups (especially great powers) towards international governance and coordination around AI and AGI.
Great powers: We might think of the USA, Russia, China, Germany, Japan, and the UK as great powers based on their respective GDPs and military might – but at the time of this writing (2024), we’re more likely to see all US-aligned nations as one great power, with China/Russia-aligned nations as another great power. By either definition (powerful nations, or clusters of aligned and powerful nations), great powers should be a useful lens for thinking about how an AI disaster would be perceived.
First of all, there are some AI or AGI disasters so damaging that humanity (and maybe all of earth-life) simply couldn’t recover from them.
Secondly, there are many kinds of AI or AGI disasters that would more likely move humanity towards an accelerated arms race (see the red scenarios above).
There is a relatively small fraction of all possible AI disasters that would actually encourage global coordination.
The key factor, as far as I can tell, lies with the perception of the “great powers.”
The Importance of Great Powers Perception
The theory I’m putting forth could be summed up this way:
In terms of propensity to unite or divide humanity (i.e. bring about global governance or an AGI arms race, respectively), it doesn’t matter whether a great power, a smaller human group (a small nation, terrorist group, etc.), or an AI “causes” the disaster.
An AI disaster is “dividing” if one great power interprets the event as an attack (overt or covert) from another great power. A perceived threat from one great power to another divides.
An AI disaster is “uniting” if all great powers interpret the event as an attack from a lesser group. A perceived shared threat to all great powers unites.
More or Less Likely AI Disaster Scenarios to Lead to Global AGI Coordination
AI disaster scenarios likely to lead to global AGI coordination:
“Rogue AI” Scenario: A strong AI “goes rogue,” takes over a manufacturing facility in Wyoming, and begins manufacturing things that humans can’t understand. The AI takes over the machinery in the plant in order to kill or kick out the human workers – and is only shut down when the US military bombs the facility and initiates a regional internet and electricity shut-down.
“Rogue Terrorist” Scenario: AI has become so powerful that an advanced terrorist organization is able to use AI to take over many of the drones and self-driving cars in a city, directing them to crash into pedestrians or government buildings.
“Alien Invader” Scenario: Through satellite imagery and strange noises heard from miles away, it becomes evident that, in an abandoned location in the mountains, AIs are building some kind of computing base underground. No one knows what the AIs are creating, but they have shot down the drones sent to investigate their underground base. All nations in the region unanimously see this activity as a rogue AI threat, not a human initiative. This would be something like Independence Day (the 1996 film), in AGI form.
AI disaster scenarios likely to lead to conflict / an arms race (I’ve ordered these roughly from most to least divisive):
“Pearl Harbor” Scenario: The United States (or any great power) is directly attacked head-on in kinetic and cyber war by another great power, in a blatant act of war. Thanks to lots of immediate international intervention and cries from the populations of both countries involved, somehow the situation doesn’t escalate to nuclear war. One side believes they’ve “made their point” or “righted a wrong,” and the other party is left with a burning recognition of their AI weaponry inadequacies.
“Supposed Proxy War” Scenario: A small nation or military group in Africa attacks multiple Chinese copper mining operations. These attacks involve precise cyber-hacks, AI-replicated social engineering, and drone operations that completely wipe out the machinery at the mines and cause dozens of injuries to the confused and fleeing workers. China refuses to believe that the African military group could have wielded such advanced technology alone, and insists that, if they did, they were directed and driven by US-aligned interests. The US and its allies totally deny involvement, but China is convinced that strong AI is being used in an act of war against its interests.
“Convenient Accident” Scenario: The French military is working with Dassault Aviation on a major national project to automate the production and operation of fighter planes and related weaponry. The project wasn’t announced publicly, but it is widely speculated about, as the firm clearly leverages more compute power than ever and opens new data centers under mysterious guises. One day the AI systems under development “go rogue,” deliberately deceiving humans, firing weapons, and destroying the entire facility. To France and its Western allies, the accident seems far too convenient to be a coincidence, and Russia is suspected immediately. It can’t be determined conclusively whether Russia caused the accident, but it is widely suspected.
The key commonality among the dividing scenarios is that when they happen, the great powers are likely to look at each other, point at their adversaries, and say “How could you do this?!”
The key commonality among the uniting scenarios is that when they happen, the great powers are likely to look at each other, point at the shared problem, and ask “How do we deal with this?!”
The sweet spot for an AI disaster lies in a clear shared problem.
A common enemy.
Concluding Thoughts – A Common Enemy
As it was:
Question: When did the ancient Greeks stop incessantly fighting amongst themselves?
Answer: When the Persian masts were on the horizon.
So it still is:
Question: When will humanity attempt to get on the same page about how to create and direct the trajectory of posthuman intelligence / AGI?
Answer: When AGI makes itself known clearly as an agentic force beyond our control which might overtake us entirely.
While some of us may be able to look squarely at the trends of AI development and see what’s coming, I don’t think it’s realistic to expect humanity broadly to tune into awareness of AGI’s importance until it quite literally smacks them in the face.
If a disaster is to happen, we must hope first that it isn’t big enough to end humanity, and second, that it is clearly not perceived as an act of aggression (overt or covert) from one great power to another. What this means for policy I don’t know.
There is no way to ensure that such a disaster won’t happen. One almost certainly will, if we avoid nuclear war long enough for AGI development to continue. There’s also no way to ensure that when one does occur, it’ll be of the (a) non-fatal, (b) non-arms-race-encouraging variety. But I suspect that anything we can do to increase the likelihood of (a) and (b) will leave us better off.
NOTE: I sincerely hope that something other than an AI disaster is capable of getting humanity on the same page about (a) avoiding an AGI arms race, and (b) developing a shared vision for the future trajectory of intelligence. Exactly what those non-disaster catalysts could be will have to be the topic of a later article.