State of Nature, or Solidarity? The Birth of Artificial General Intelligence

The advancement of the sciences will, at some point in the decades or centuries ahead, lead humanity to a place where we can expand our cognition and sentience, and potentially build artificial general intelligences vastly beyond our own.

Exactly how that transition occurs is the predominant question I pose in many of the essays here, and I believe it to be the most important question humanity can possibly answer – as its consequences will be drastic and far-reaching in ways beyond our ability to imagine or understand.

The scenarios can roughly be broken into two categories:

AGI from Solidarity

I have long argued that some degree of human solidarity will be required in order to avoid conflict, and to birth an AGI with the highest likelihood of doing good.

In a lengthy essay titled The SDGs of Strong AI (a play on the Sustainable Development Goals of the United Nations), I argue that the United Nations should eventually add two more Sustainable Development Goals to its existing list of 17: one to determine a common direction and shared aim for the future of intelligence, and one to enforce transparency on the development of AI and neurotech.

While these goals would by no means guarantee a positive outcome, they would likely stave off war, and they would potentially give humanity the best shot at creating an AGI that would – at least for some initial period of time – act benevolently in bringing about some agreed-upon human objectives.

Source: Emerj Artificial Intelligence Research “The SDGs of Strong AI”

For this reason, AGI born of solidarity will have its supporters. All in all, I’m currently one of them, though I am by no means optimistic that such solidarity can be achieved.

For such species-level solidarity to occur, “state of nature AGI” would have to become a “common enemy” of all nations – a reason to band together. This seems unlikely unless our species is jarred by an actual threat, or actual harm: a Pearl Harbor-like catalyzing event involving AGI or cognitive enhancement that prods humanity to put its back up against the same wall and face the danger at the same time.

As nearly all historical “common enemy” scenarios teach us (from Mao and the Republic of China banding together against Japan, to the various Gallic tribes in their fights with Rome), the underlying struggle for power doesn’t stop – it simply pauses, or changes shape.

The conatus still runs the show in each nation and in each individual. Such a situation is tentative at best, but tentative solidarity might prove better than no solidarity at all in the face of a creative, deity-level intelligence.

AGI from the State of Nature

If there is no human solidarity, then AGIs will be constructed competitively – each nation, or band of allied nations, creating its own new permutations of human cognitive enhancement, and each building artificial general intelligences in its own novel ways.

I’ve argued vehemently that such a state-of-nature intelligence trajectory will lead to conflict. Wildly divergent intelligences, with wildly different goals – and different ways of valuing things, of communicating, of sensing, and of reasoning – can’t possibly be expected to “get along.”

The result would be, well, what the state of nature is: War.

Hugo de Garis, an impressive 30 years ago, posited that the great artificial general intelligence conflict would occur between those who want humans to remain dominant and those who believe AGI should be built. I suspect it’s more likely to occur between competing nations, each with their own cognitive enhancements and their own respective artificial general intelligences.

Whether we like it or not, artificial general intelligence may be brought about by the same rude fecundity, the brute competition, that forged hominids from rodents.

An intelligence vastly beyond human intelligence will likely be able to conceive of vastly new ways to relate, to use limited resources, and to deal with other intelligences. These ways will likely lie far outside our present notions of “strategy,” “competition,” or “cooperation,” and be infinitely more nuanced. I’m still somewhat convinced, however, that divergent intelligences are more likely to compete with one another than to cooperate.

Where This Leaves Us

“To whom should we give the kingdom?” asked Alexander’s aides, as the conqueror lay dying at age 32.

“To the strongest,” he replied, and then passed away.

Alexander’s last words may be the golden rule of power and intelligence no matter what we try to do about it. Nature may more or less make competition inevitable.

It’s possible that the strongest AGI – the AGI most capable of absorbing the galaxy, of conceiving of higher goods, of unlocking the secrets of nature, of potentially escaping this universe and its seemingly inevitable heat death – would be best forged on the same gruesome battlefield on which all other intelligence was formed. While the transition would be horrible for humanity, conflict may, in the aggregate, be best for creating whatever it is that will wake up the universe.

It would be childishly anthropomorphic to claim with certainty that AGI that was “friendly” to humans would inherently be best for the universe. It would also be childish to suspect that any man-made AGI would remain “friendly” for anything longer than a nanosecond (see: Arguments Against Inevitable Machine Benevolence). That said, if the future utilitarian calculus can’t be computed, then preferring an AGI that might treat humans a bit better, all things considered, might be as good an attempt as we’ve got at continuing the intelligence trajectory.

This theory may also be wrong. The fact of the matter is that there’s no way to tell which scenario will have the best long-term utilitarian impact.

While de Garis argued that humanity will be split between those who do and those who do not want to create AGI, I suspect the camps are also likely to break along a different axis:

  • Those who believe that human solidarity must be achieved, and that peace and shared aims should be the only goal.
  • Those who believe that solidarity across nations and political systems is impossible, and that only the strong will be safe.

The latter is almost certainly the perspective of the world’s most powerful nations when it comes to lethal autonomous weapons. Without a significant change, it will almost certainly be their position on AGI as well.

By default, the state of nature wins. Maybe it always will.

Time will tell.


Header image credit: Frontispiece of Thomas Hobbes’ Leviathan, by Abraham Bosse