A Partial Inquiry on Fulfillment Beyond Humanity
Alexander the Great had enough time in his 32 years to accomplish many objectives: defeating the Persian Empire, conquering the Hellenic world and large swaths of India, and so on. Succession planning, as it turns out, hadn't been much of a priority for the greatest of Macedonians, and as he lay on his deathbed, he was asked:
“To whom shall we give the kingdom?”
In what is sometimes reported to have been his last words, he replied:
“To the strongest.”
Alexander's exact last words can't be known for sure, and there are a number of differing accounts.
His choice of wording isn’t necessarily impressive or poetic – but when I first read of Alexander’s last words over five years ago, I was struck by the kind of worldview they seem to imply. It is almost as though he foresaw that his generals would battle after his death to carve up his empire, and that there was nothing he could do about it. It suggests a kind of determinism: “To hell with who I say should rule – whoever is strongest will rule.”
I use this particular quote as the keystone for this small essay because the idea of the world’s greatest empire of its time being handed over to whoever had the strength and will to wield it is a disturbing but potentially prophetic (and in no way new) idea about the nature of man, and the state of nature of which man is part.
Are we relegated to this same condition?
Is the future of power and intelligence bound to the same aimless “survival of the fittest” dynamic?
In my conversations and talks about AGI, I speak frequently of the “Last Words of Alexander” as a reference point to the state of nature that we live in, to the brutal wrestling for power that happens all around us, and to the possible inevitability that this power may always fall to “the strongest” – and that this brute struggle may not only be what births post-human intelligence, but may also dictate the competitive dynamics between different post-human intelligences.
Below I’ll present a (rather pessimistic) hypothesis about the nature of power, and discuss how humanity might grapple with the dynamics of power as we work our way towards post-human intelligence.
If this hypothesis is correct, it bodes poorly for our transition to post-human intelligence. It would seem to lead humanity to an “arms race” scenario of artificial general intelligence and/or cognitive enhancement – where strong countries compete to construct the most powerful intelligence in order to avoid being subject to the might of other powerful nations.
If we hope for peace and concord between nations, between people, and between humanity and future post-human intelligence, this Hobbesian “state of nature” scenario seems to be worth avoiding.
I’ll try to explore this idea through some hypothetical “paths to safety” which humanity might pursue:
The hypothesis presented above is admittedly pessimistic, and I don’t dogmatically subscribe to this degree of pessimism about our condition.

Still, it’s bothersome to consider that the brutal struggle for survival may always continue (as per the example above with the singleton) – that no matter what humanity achieves, or what high forms intelligence might take, multiple entities living in the same physical space will inherently face dynamics of competition, and that even collaborative periods must involve hedging against the risk of eventual conflict.
Then again, maybe it’s childish to think otherwise. Maybe it’s the dream of an infant to always have a parent to “make things right” and keep us safe – and maybe there is nothing more natural, as a mature adult, than coming to grips with the fact that the strong survive, and that those smart and strong enough to obtain power make themselves more capable of staying safe and of ensuring that their ends and objectives are met.
Maybe nature hurls its way forward by creating various forms and letting them die off or survive in the blind entropy of the universe, and this is the only version of “progress” that exists.
Like the Lord of the Flies, only… forever… and through myriad permutations of intelligence. Maybe that’s a pill we have to swallow. I hope not, though.
Header image credit: Wikipedia