AGI and the Final Kingdom

In history, we are reminded that all is in flux. America is likened to Rome. China today is likened to the Tang Dynasty of many centuries ago. Europe was once pure barbarity, then became the leader in technology and economic development. Empires and kingdoms rise – and then at some point fall.

“They are the sowers, their sons shall be the reapers, and their sons, in the ordinary course of things, must yield the possession of the harvest to new competitors with keener eyes and stronger frames.” – Ralph Waldo Emerson, Essays Second Series: Manners

We assume that these cycles can and will continue indefinitely – with different ethnic groups and different styles of government gaining and losing prominence.

We forget that the same is so with species.

I present a hypothesis:

The advent of artificial general intelligence will make the leading technological and economic power of the 21st century into the final human kingdom. This kingdom will largely develop, release, and (at least initially) direct the trajectory of post-human intelligence that will ultimately overtake humanity.

This implies that – at some point in AGI development – national interests will commandeer AGI development efforts.

In China, “private sector” companies are already directed by the CCP, and entirely beholden to the government’s whims and goals for expansion or power. In the USA, we can’t long expect the DoD to sit back and watch as an AGI lab is clearly about to birth something more powerful than the US military. When that moment comes, or as we approach it, I suspect the tanks will roll up to OpenAI’s (or whoever’s) doorstep, and the keys will be handed over.

It is possible that for hundreds of years, different empires and systems of homo sapiens governance will rise and fall. I suspect this will not be the case, and a great many AI researchers suspect a Singularity scenario before the 2070s (Dec 2023 note: This article was published in 2019, when 2060 AGI timelines seemed reasonable. Now, nearly everyone’s timelines are shorter).

If these are the Final Kingdoms, here’s what that might mean:

  • Great power countries (US / China) will increasingly orient their economic and technological development goals around artificial general intelligence and cognitive enhancement (read: World Eaters) – this will begin with China, given its longer-term thinking and top-down government – but the US will follow in AGI efforts once it understands the threat.
  • Less powerful countries (most of the EU, smaller Asian nations) will band together to attempt to (a) lessen the dominance of great power countries, and/or (b) create some kind of global governance and transparency around AGI and neurotech, in order to ensure that the benefits of these technologies can’t be wielded unfairly or destructively by the great powers.
  • The US and China, and the most powerful AI tech firms on earth today (Google, Facebook, Baidu, etc.) will ground their long-term strategy not merely in economic victory, but in controlling AGI and controlling the virtual experience of human beings (read: Substrate Monopoly).
  • It behooves China to act quickly on AGI development in light of its looming demographic problem (an aging and shrinking population) – with a laser focus and direct goal (as we’ve already seen in China’s ambitious national AI plans).

I have no crystal ball, and I can’t be sure that the Final Kingdom premonition will hold true, but for now I suspect it will.

If the Final Kingdom is a remote possibility, then I would argue the following:

  • Organizations like the United Nations should address not only AI’s privacy and social justice concerns, but also its preeminent place at the center of the greatest (and potentially final) power struggle on earth.
  • The West should seriously consider its economic and technological competitive standing in the 21st century – if it wants to see its interests represented in an artificial intelligence-enabled future, or simply not to be beholden to whichever nation creates AGI first.
  • More multilateral effort (particularly between the US and China) should be fostered to build affinity between businesspeople and AI/neurotechnology researchers, making enmity and conflict between the two nations more difficult.
  • International bodies should frankly acknowledge that there is a grail to which the great powers aspire, and this should spur open dialogue as to how we as “team humans” (all homo sapiens) can handle the national incentives to compete, and somehow direct the trajectory of intelligence together – even if it ultimately means that homo sapiens will not hold the highest reins of power on earth.

My own interviews with Chinese and US AI innovators (both CEOs and researchers) – along with multilateral AI conversations between the US and China at the United Nations – lead me to believe that conflict isn’t necessarily inevitable… though I still consider it likely.

The cosmopolitan spirit is stronger than it ever has been, and we’ll need as much of it as we can muster in the years ahead. Maybe humans will create a kind of shared Final Kingdom together. Maybe, as Hugo de Garis has suggested, the greatest war ever fought will be fought over this issue of species dominance (read: Political Singularity).

Time will tell.

Header image credit: vision.org