You Must Be an Idiot

In 1985, Microsoft was an underdog “good guy” – standing against big evil corps like IBM.

In 2003, Google was an underdog “good guy” in a world of big evil corps like Microsoft.

In 2016, OpenAI was an underdog “good guy” in a world of big evil corps like Google.

In 2023, StabilityAI is an underdog “good guy” in a world of big evil corps like OpenAI.

And you believe in the “selflessness” and moral virtue of the underdog?

You must be an idiot.

Every Robespierre becomes a Napoleon if you give him the scepter.

Among those who rail against tyranny are many who long to be tyrants.

In innovation (building technological capability) and in regulation (governance), the actions of the actors will be predictably, boringly self-interested:

Source: AI Regulation as a Means to Power

It has never been otherwise.

Humanity forgets this because we are – by and large – idiots.

In any instance where we are surprised that an entity or person acts in its own self-interest, we are being inexcusably naive.

Whether violence or peace, whether competition or cooperation – self-interest should be understood to be the eternal default for all conscious and unconscious living things.

The conatus is king and lord, and always has been.

There are two conditions under which you support the “underdog is morally good, and incumbent is morally bad” argument:

  1. You are a Snake: Because you believe you would benefit from said policies. You consciously or subconsciously label the person pushing the narrative that serves your self-interest as morally “good” (and you probably label those who push initiatives against your self-interest as morally “bad”).
  2. You are Naive / an Idiot: Because you genuinely believe the underdog’s claims of selflessness and virtue, and fail to see the self-interest seething beneath them.

As AI’s power has become blatantly evident, and as the political and ambition singularities approach, we have new categories of “good guys” on the side of “selflessness and virtue” – arrayed against the “selfish, evil and powerful”:

There are oodles of other “virtuous good guy” / “for ‘The People™'” voices out there. I’m really not aiming to pick on Ben or Emad here, per se.

In fact:

  • I am among Goertzel’s biggest fans (his Cosmist Manifesto is among my favorite books), and have interviewed the man many times – and plan to do so in the future as well. I think his vision of a decentralized AGI is a worthy idea, and may just be the best path forward.
  • I also commend the efforts of Emad as an important counterbalance to Big Tech’s power, and probably a net good.
  • I respect them both for their achievements and their approach.

But I am not dumb enough to think that either of them is “selfless,” that they deserve the scepter, or that with supreme power they would somehow serve my interests any better than Altman or Zuck or anyone else.

I have read enough history to see that the argument “Hey… look, I’m doing it ‘for the people!’, I’m not one of the bad guys!” is literally one of the best pathways to power.

OpenAI literally did exactly this (“we’re ‘Open’ and morally good, unlike big tech!”), then they pivoted to being “closed” as soon as it behooved them to do so (almost as if the conatus is king and lord… weird, right?).

Montaigne was right – Nayef Al-Rodhan is right – virtue doesn’t exist – but all is not lost.

Governance generally – and AGI governance in particular (if such a thing is possible) – requires us to embrace the following:

  • Every person and entity will act in its own self-interest. There are no saints.
  • Absolute power does not “corrupt” anything – it simply gives free rein to the inherently amoral (not immoral) and selfish drives that are the prime mover of all living things.
  • Given the inherent self-interest of all living entities and organizations, the following two things can both be true:
    • It is good to have multiple agents vying for power, serving as checks and balances on one another, and potentially preventing any one entity from tyrannically taking over. Such a balance may be in the best interest of all the varied groups involved (e.g. the Peace of Westphalia and the formation of the United Nations).
    • None of those individual agents vying for power is “selfless”, and none would do anything but what behooves their own interests if you gave them the scepter.

But power has always required the management of perception. Overtly or covertly, power requires conveying the following, constantly:

“Listen, people, give me power and I will serve you, I promise… but that person will definitely not serve you if you give them power.”

That’s how power works. Perception management and intellectual dishonesty are required. So be it.

When it comes to approaches to AGI and AI governance, “let a thousand flowers bloom,” so to speak. Maybe we’ll all be better off with all those checks and balances and ideas floating around at the same time.

But if you don’t see the purely self-interested conatus seething beneath all the virtuous preening that happens as all these parties jockey for power… if you see the party that behooves you as “saintly” and those against your interests as “selfish/bad”… I’m afraid to inform you:

You must be an idiot.


Header image credit: Wikipedia