A Partial Inquiry on Fulfillment Beyond Humanity
We assume that the tech founders who are building artificial general intelligence (AGI) care about safety. The firms closest to creating AGI certainly talk about it a lot. If not for the safety of others, then surely for the safety of their own employees.
Even if they were recklessly ambitious and heartless, surely they’d care about their own safety.
Right?
Wrong.
Believing that an ambitious man values safety over pride or power is a total misunderstanding of men in general, and ambitious men in particular.
Eugène Delacroix’s masterpiece The Death of Sardanapalus depicts a probably fictional king of Assyria, in his final hour.
Locked away in his palace, fated to die after losing a war against rebelling factions, Sardanapalus orders a funeral pyre constructed, piles upon it his precious metals, royal clothing, concubines, and eunuchs, and, ascending the pyre himself, is burned alive along with it all.
Sardanapalus is painted as a singularly flawed, spoiled, and evil man, but his actions echo the striving of men everywhere: to go out on one's own terms – to carry one's pride beyond one's own death.
This prideful urge has led men* to some dastardly things:
…but it has other manifestations as well:
For reasons good, bad, or otherwise, some men find physical death preferable to ego death. Winning the race is worth all the risks that winning entails.
I’m not saying that I like this element of human nature – or that you should like it. It just is.
The Sardanapalus Urge: When faced with being vanquished by a foe, many men would prefer to go out by their own hand, in a blaze of glory that showcases their power and might – and seems to them to turn their defeat into a kind of manful, final victory. An imposition of their own power, not a succumbing to anyone else’s.
If racing toward a flying machine or a scientific discovery is worth risking death – imagine what it's worth to bring a post-human intelligence to life. The incentives are too great to resist.
Whoever creates AGI will wield massive economic power (a machine that can do anything is pretty valuable) and massive physical power (a military with AGI would almost certainly secure the position of the strongest).
But it seems somewhat obvious that AGI won’t be controllable for long – if it’s controllable at all.
This might involve the machines:
Would these risks halt the founders from progressing with AGI?
Surely the threat of their own deaths, and the end of humanity itself, would deter these ambitious men from putting the pedal to the metal on AGI innovation?
Of course not.
AGI founders have the incentive to go pedal to the metal. Pride above safety. As it has always been with ambitious men. Call it a vice if you want.
The only true efforts towards “safety” will come from two groups:
Both these groups act by the same force driving today’s AGI company founders: Self-interest.
Incentives rule the world.
From AGI company founders – expect lip service to safety, a safe preening of virtue and benevolence. Expect the delicate tactful art of making one’s power more palatable or less threatening to the powerless. Do not blame them for it. Power is a hard game.
If you were in their shoes, you’d have to do the same. Your own selfishness wouldn’t let you admit it publicly, but you would.
If your pride and power were tied to your technical prowess… if you faced a meaningless existence, born to die (the condition we all share)… and you had the opportunity to birth a deity that might bloom beyond humanity and populate the galaxy… maybe discover an escape from the eventual heat death of the known universe, you would do the same.
If there is a kind of poetic and prideful appeal in dying to create an invention, or defending a nation, or reaching a new mountain top – that appeal is a million times greater for creating a superintelligence. It would be the highest human achievement – arguably the last human achievement.
Even if humanity came to an end from such an event – think of that final poetic victory for the founder – the final act of power, of dealing with their existential condition in a way that is on their terms. Imagine the motive to race to that finish line when the alternative might mean living their final moment being devoured by the AGI of their rival.
Like Sardanapalus, but a thousand times loftier.
If you were in their shoes, you would do the same.
But you’re probably not smart or ambitious enough to be in their shoes (it’s okay, neither am I). Don’t go to bed thinking you are, and thinking that their position comes from mere luck or vice. Those are whining pleas to protect your own pride. You have no time for that, and I have no patience for it.
Remember only this:
Just don’t pretend you’re doing it because you’re selfless.
The Death of Sardanapalus – T.J. Wilkins, 1875
Think I would hang my head, ye fools?
Think open I this golden gate?
Conquered, my will could never be,
think ye I would not sneer at fate?
See you this gilded castle high?
See you these soft and slender maids?
You think the glory I have won,
you’d deign touch with hands or blades?
My gaze commands the fire and steel.
My port, an emperor’s remains.
My soul does tower over thine,
my will, of this defeat disdains.
Ye call this evil, trifling fools?
Ye call this spite, and weakened will?
Or do thee cry to see now how,
my lofty soul conquers ye still?
‘Tis I who plunder, capture all.
‘Tis I make pretty heads to roll.
This final, crimson work of art,
makes greater still my lofty soul.
Now look – and tremble at my strength!
Now look – before I’m choked in flame!
By this here mound of gore and gold,
I force the world to know my name!