The Last Words of Alexander – and the Future of Artificial Intelligence

Alexander the Great accomplished a great deal in his 32 years: defeating the Persian Empire, conquering the Hellenic world and large swaths of India, and so on. Succession planning, as it turns out, hadn’t been much of a priority for the greatest Macedonian, and as he lay on his deathbed, he was asked:

“To whom shall we give the kingdom?”

In what are sometimes reported to have been his last words, he replied:

“To the strongest.”

Alexander’s exact last words can’t be known for sure, and there are a number of differing accounts.

His choice of wording isn’t necessarily impressive or poetic – but when I first read of Alexander’s last words over five years ago, I was struck by the kind of worldview that they seem to imply. It is almost as though he foresaw that his generals would battle after his death to carve up his empire, and that there was nothing he could do about it. It suggests a kind of determinism: “To hell with who I say should rule – whoever is strongest will rule.”

I chose this particular quote as the keystone for this small essay because the idea of the world’s greatest empire of the time being handed over to whoever had the strength and will to wield it is a disturbing but potentially prophetic (and in no way new) idea about the nature of man, and the state of nature of which man is part.

Are we relegated to this same condition?

Is the future of power and intelligence bound to the same aimless “survival of the fittest” dynamic?

In my conversations and talks about AGI, I speak frequently of the “Last Words of Alexander” as a reference point for the state of nature that we live in, for the brutal wrestling for power that happens all around us, and for the possible inevitability that power may always fall to “the strongest” – and that this brute struggle may not only be what births post-human intelligence, but may also dictate the competitive dynamics between different post-human intelligences.

Below I’ll present a (rather pessimistic) hypothesis about the nature of power, and discuss how humanity might grapple with the dynamics of power as we work our way towards post-human intelligence.

Hypothesis of the Last Words of Alexander

  • There is no set of future conditions where safety, happiness, and survival are guaranteed – a fight for survival and jockeying for power will always exist. One is safe only insofar as one is strong.

If this hypothesis is correct, it bodes poorly for our transition to post-human intelligence. It would seem to lead humanity into an “arms race” scenario of artificial general intelligence and/or cognitive enhancement – one where strong countries compete to construct the most powerful intelligence in order to avoid being subject to the might of other powerful nations.

If we hope for peace and concord between nations, between people, and between humanity and future post-human intelligence, this Hobbesian “state of nature” scenario seems to be worth avoiding.

I’ll try to explore this idea through some hypothetical “paths to safety” which humanity might pursue:

Methods by Which Humanity Might Escape the Hobbesian State of Nature

The hypothesis presented above is admittedly pessimistic. While I don’t dogmatically subscribe to this degree of pessimism about our condition, it seems worth taking seriously enough to test against each of the methods below:

  • Method 1 – Upload your mind into a computer à la San Junipero:
    • This isn’t a full escape, because:
      • Who gets to decide who gets uploaded? Who gets to decide how people are uploaded, how people’s minds are “stored”, and in what substrates? The party or entity managing the uploads would be in an incredible position of power – potentially controlling the experiences of all uploaded people, and able to turn them off or alter their condition (for better or worse) immediately. Someone still has power over the substrate (i.e. the ultimate power), and some people will fight vehemently to control that substrate rather than enter a blissful uploaded dream world, lorded over by whoever owns it (as I’ve written about in “Lotus Eaters vs World Eaters”).
  • Method 2 – Harmonious international human collaboration, where we globally manage and regulate AI and neurotechnologies so that they develop as peacefully as possible:
    • This isn’t a full escape, because:
      • Powerful nations will naturally hold the greatest sway over any kind of “global” organization (read about the founding of the United Nations to get a sense of the dynamic between the USA, the other powerful nations of the time, and the smaller, powerless nations), and jockeying for power during the formation of such a group would be inevitable. In addition, if such a group existed and had representatives elected from the various nations, then the dog-eat-dog world of politics would apply directly to the dynamics of such a global organization. Having sway in the organization that manages the creation of post-human intelligence is – well – rather worth fighting for, and we can presume that a great many people would fight ravenously for it.
  • Method 3 – Merge with the superintelligent singleton: A “singleton” (a term from Nick Bostrom, from an essay that I highly recommend) is a hypothetical superintelligence which controls the world order and permanently exerts its dominance by preventing internal and external threats to its supremacy. Merging with such an intelligence via some kind of mind upload would seem to be a reasonable way to ensure one’s survival (at least partially).
    • This isn’t a full escape, because:
      • We can’t presume that such a singleton would want anything to do with merging with human beings, nor can we presume that any intelligence beyond humanity would in any way value human life (see my essay on the moral singularity).
      • Even a singleton, with all of its power and might, would have to deal with worldly threats, such as meteors, the expansion of the sun, and the dissipation of the universe into nothingness. Such an entity may need to determine how to escape into other dimensions or find means of survival that we can’t possibly imagine as humans (just as fleas cannot imagine the grand global challenges of humanity). This dogged diligence and struggle for survival may not cease even in other dimensions, as a singleton may encounter extraterrestrial life, or other challenges of physics, and may constantly and eternally have to struggle and strategize for its own survival.
  • Method 4 – Seize the means of production and create a happy and peaceful egalitarian state where vicious competition doesn’t exist and peace and prosperity are had by all.
    • This isn’t a full escape, because:
      • Someone must still administer and enforce such an egalitarian order, and whoever controls the levers of that state holds precisely the kind of power that others will fight to seize. Such a state would also have to contend with external rivals who haven’t adopted the same arrangement – the struggle for power is displaced, not eliminated.

It’s bothersome to consider that the brutal struggle for survival will always continue (as per the singleton example above) – that no matter what humanity achieves, and no matter what higher forms intelligence might take, multiple entities living in the same physical space will inherently have dynamics of competition, and even collaborative periods must involve hedging against the risk of eventual conflict.

Then again, maybe it’s childish to think otherwise. Maybe it’s the dream of an infant to always have a parent to “make things right” and to keep us safe – and maybe there is nothing more natural, as a mature adult, than coming to grips with the fact that the strong survive, and that those who are smart and strong enough to obtain power make themselves more capable of staying safe and of ensuring that their ends and objectives are met.

Maybe nature hurls its way forward by creating various forms and letting them die off or survive in the blind entropy of the universe – and maybe this is the only version of “progress” that exists.

Like the Lord of the Flies, only… forever… and through myriad permutations of intelligence. Maybe that’s a pill we have to swallow. I hope not, though.


Header image credit: Wikipedia