Once the grand vision of artificial general intelligence (or the general importance of AI power in the years ahead) is seen, there are only two main responses from extremely ambitious persons:
Response 1: Build Power Quietly
“I will pursue predominance in this technology… I will quietly amass power and own this new dynamic that will determine the fate of the world.”
Only big tech (Google, FB, AMZ) and superpowers (USA, China) can really have this response.
Response 2: The Banner of Virtue
“I will, like Robespierre, feign virtue and clamor against the selfishness of the powerful, calling myself a representative of the good. In so doing, I will strive to gain control over these future technologies through benevolence.”
Everyone else fits in this camp. That means all of Europe, all smaller AGI startups (GoodAI, etc), all smaller new policy organizations (ALLAI, etc).
Once the ambitious see the big game ahead, they take one of the two paths – and most opt for the latter, as it is the only path accessible to them.
They start the company with a mission statement of goodness and of virtue, and at night they tell themselves:
“To them out there, I will be a paragon of virtue, this will all be for some abstract good… and by this means, and through this veneer, I will have my power… I will have my influence…”
Most ambitious people believe themselves to be among the first to have this idea.
“Ah yes” they think, “an avenue to win the great power of strong AI… and compete with big tech and global powers… my banner will be that of the GOOD guy.”
One can imagine the face they make upon this realization, believing it to be a unique idea.
Naive and foolish persons observing the founding of these organizations believe this veneer to be true.
I am not calling either approach 1 or 2 “wrong.” They are simply logical responses of the conatus (https://en.wikipedia.org/wiki/Conatus); any human in the shoes of Musk would take his approach, and anyone in the shoes of Larry Page would take his. I can’t blame anyone for acting in their self-interest.
OpenAI was Elon Musk’s attempt at the latter – but we have so many other examples. ALLAI, Partnership on AI… and basically everything Europe/UK have done around AI have been of this “valiant and virtuous champion in the face of the bad and selfish powerful” mold.
OpenAI could only hold the veneer for so long until it inevitably gave way.
The world is messy, our own interests will only be represented if we ourselves ensure them – and managing perception (“Open” AI… “Good” AI… etc) is among the most important tools in the toolbelt of power.
Names like GoodAI, BenevolentAI, and AI Foundation proliferate.
Fifteen years ago, where was the “Goodsoft Inc.” of SaaS?
Because now, the small know that their position against the strong – their means to influence – is to deny the selfishness of their aims, to deny the fact that their aims spring from the same self-serving human drives that the powerful have.
Eventually, AI ethics organizations – including groups within the United Nations and other IGOs, as well as DeepMind and others – will slander the “goodness” and “openness” and “virtue” of their rivals – just like Robespierre did.
They will battle over the perception of goodness, so that they – and not their rivals – will get their hands on power.
Those who run AI-related “good” organizations (again, they aren’t bad people, I can’t blame them for the approach) will inevitably state the following:
“No no, for us, it really is about being selfless. OpenAI and DeepMind are sell-outs, but WE are truly saints.”
What they are really saying is:
“See my halo, break the halo of my rivals, and grant ME the power, so that I can ossify it and lock it in.”
The optimists among my friends may disagree that all is conatus.
I suspect that the friends that I have who are part of these kinds of “Good AI” organizations will also resent this idea, even though it isn’t an attack on them.
We need to be aware of the vicious selfishness inherent to ourselves and others, in all groups and purported causes, and somehow leverage that harsh reality to motivate solidarity – or else – in the face of super-powerful AI and brain-machine interface tech.
Straight up – solidarity or else. Expecting AI to “work itself out” is dangerous, and expecting saintly “Good AI” groups to relieve our troubles is extremely dangerous.
“Good” AI groups, like any group made of humans (or animals), should be seen as another voice to reckon with and consider, not as actually selfless or saintly, or better – in any moral sense – than any other group.
Header image credit: Wikipedia