A Partial Inquiry on Fulfillment Beyond Humanity
In 1985, Microsoft was an underdog “good guy” – standing against big evil corps like IBM.
In 2003, Google was an underdog “good guy” in a world of big evil corps like Microsoft.
In 2016, OpenAI was an underdog “good guy” in a world of big evil corps like Google.
In 2023, StabilityAI is an underdog “good guy” in a world of big evil corps like OpenAI.
“Power corrupts, and absolute power… is kinda neat” https://t.co/l0IJUuioFP
— Joscha Bach (@Plinz) March 25, 2023
And you believe in the “selflessness” and moral virtue of the underdog?
You must be an idiot.
Every Robespierre becomes a Napoleon if you give him the scepter.
Among those who rail against tyranny are many who long to be tyrants.
In innovation (building technological capability) and in regulation (governance), the actions of the actors will be predictably, boringly self-interested:
It has never been otherwise.
Humanity forgets this because we are – by and large – idiots.
In any instance where we are surprised when an entity or person acts in their own self-interest, we are being inexcusably naive.
Whether violence or peace, whether competition or cooperation – self-interest should be understood to be the eternal default for all conscious and unconscious living things.
The conatus is king and lord, and always has been.
There are two conditions under which you support the “underdog is morally good, and incumbent is morally bad” argument:
As AI power has become blatantly evident, and as the political and ambition singularities approach, we have new categories of “good guys,” on the side of “selflessness and virtue” – against the “selfish, evil and powerful”:
Open offer to anyone @OpenAI who actually wants to work on Open AI:
We will match your salary, benefits etc but you can work on any open source AI projects you like, ours or others.
Collaborate, be open and prioritise good outcomes over self interest: https://t.co/BRPspJ2RKB https://t.co/7l5O5mHJEc pic.twitter.com/DFIn1wHmA9
— Emad (@EMostaque) March 16, 2023
2) Those with longer than average memories may recall that a few years ago, OpenAI was funded by Musk and Sam Altman with a narrative of guiding AGI development in an open and beneficial direction… but from the start they were clear about their non-commitment to open source
— Ben Goertzel (@bengoertzel) February 18, 2020
There are oodles of other “virtuous good guy” / “for ‘The People™'” voices out there. I’m really not aiming to pick on Ben or Emad here, per se.
In fact:
But I am not dumb enough to think either of them are “selfless,” that they deserve the scepter, or that with supreme power they would somehow serve my interests any better than Altman or Zuck or anyone else.
I have read enough history to see that the argument “Hey… look, I’m doing it ‘for the people!’, I’m not one of the bad guys!” is literally one of the best pathways to power.
OpenAI literally did exactly this (“we’re ‘Open’ and morally good, unlike big tech!”), then they pivoted to being “closed” as soon as it behooved them to do so (almost as if the conatus is king and lord… weird, right?).
Montaigne was right – Nayef Al-Rodhan is right – virtue doesn’t exist – but all is not lost.
Governance generally – and AGI governance (if such a thing is possible) – requires us to embrace the following:
But power has always required the management of perception. Overtly or covertly, power requires conveying the following, constantly:
“Listen, people, give me power and I will serve you, I promise… but that person will definitely not serve you if you give them power.”
That’s how power works. Perception management and intellectual dishonesty are required. So be it.
When it comes to approaches to AGI and AI governance, “let a thousand flowers bloom,” so to speak. Maybe we’ll all be better off with all those checks and balances and ideas floating around at the same time.
But if you don’t see the purely self-interested conatus seething beneath all the virtuous preening that happens as all these parties jockey for power… if you see the party that behooves you as “saintly” and those against your interests as “selfish/bad”… I’m afraid to inform you:
You must be an idiot.
Header image credit: Wikipedia