Altman is Everyman – AGI Requires Coordination, Not Saintly Leaders

If you claim to be fighting for positive AGI outcomes and you’re calling Altman “selfish” or “dangerous” or “evil,” you’re part of the problem.

Here’s the TL;DR on this article:

  • It would be ridiculous to expect to find “selfless” AGI lab leaders (or politicians, or people in general). We should accept human nature (as we do in law/sport/etc.) and assume amoral selfishness from individuals and organizations.
  • It is not in the individual self-interest of AGI leaders to slow down or avoid risks to humanity, because (as of now) if they don’t race to build the AGI that may kill everyone, someone else will build it.
  • Aligning humans to net-positive outcomes is not a game of picking saints (i.e. denying human nature), it’s a game of channeling self-interest (i.e. accepting/using human nature) through international AGI governance.

In a simple visual format, I’m arguing that Selecting Saints is an abysmal approach to positive AGI outcomes:

Remember what the man himself said. Sam is wise enough to play the game, but also frank enough to lay it out.

Incentive, Not Character, Drives the AGI Arms Race

Today’s leaders or employees of AGI labs have two choices:

Option 1: Build AGI first:

  • Potentially acquire a great deal of power for a short time
  • Probably also be destroyed by AGI itself

Option 2: Don’t build AGI first:

  • Watch someone else acquire the power instead of you
  • Then probably be destroyed by AGI itself

Because these are the only alternatives, even decent and well-intended people will recklessly drive towards AGI, regardless of risk.
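The incentive structure above is a textbook prisoner’s dilemma. A minimal sketch (the payoff numbers are illustrative assumptions, not measurements of anything):

```python
# Illustrative (made-up) payoffs for two AGI labs choosing to Race or Pause.
# Each entry: (payoff to A, payoff to B). "Race" risks catastrophe but
# captures power if the rival pauses; "Pause" cedes power if the rival races.
PAYOFFS = {
    ("race", "race"):   (-5, -5),   # arms race: high mutual catastrophe risk
    ("race", "pause"):  (10, -10),  # A grabs power, B watches from the sidelines
    ("pause", "race"):  (-10, 10),
    ("pause", "pause"): (1, 1),     # coordinated restraint: modest, safer payoff
}

def best_response(options, their_choice, player):
    """Pick the option maximizing this player's own payoff, holding the rival fixed."""
    def payoff(mine):
        pair = (mine, their_choice) if player == 0 else (their_choice, mine)
        return PAYOFFS[pair][player]
    return max(options, key=payoff)

# Whatever the rival does, "race" is each player's selfish best response...
for rival_choice in ("race", "pause"):
    assert best_response(("race", "pause"), rival_choice, player=0) == "race"

# ...so both race, even though both pausing would leave both better off:
print(PAYOFFS[("race", "race")], "vs.", PAYOFFS[("pause", "pause")])
```

Racing strictly dominates pausing for each player individually, which is exactly why the outcome doesn’t depend on the character of whoever is choosing.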

This has led to an obvious arms race on the way to general intelligence. Not only between companies (Meta/Google/MSFT have all overtly stated that they’re driving towards AGI), but also between nations (China has made it clear that AGI is their goal).

“AI will probably destroy us all, but in the meantime there will be some great companies.” Sam Altman

“With artificial intelligence we are summoning the demon. You know the story of the guy with the pentagram and the holy water… like… yeah (sarcastically) you’re pretty sure you can control the demon” Elon Musk

“There’s a long tail of things of varying degrees of badness that could happen. I think at the extreme end is the Nick Bostrom-style of fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.” Dario Amodei

All three of these individuals are still actively, ravenously driving towards AGI predominance. Given the alternative (being destroyed by someone else’s AGI), I don’t blame them. If given the same two choices, essentially everyone would do the same things they’re doing now.

Ask people how to solve this arms race scenario we find ourselves in, and you’ll typically hear:

“We need to get rid of [insert name of founder, i.e. Altman], he’s a selfish psycho!”

The implicit assumption here is that there is a class of selfless person who can reliably take the helm of power and do so purely in the service of abstract “humanity,” without their own self-interest. 

But the truth is this:

We can never reliably count on any person or any organization to act in any way other than its own self-interest.

Humans are not immoral, they are amoral

The conatus is king. 

We are not locked in an AGI arms race because “bad guys” are running AGI firms. We are locked into an AGI arms race because the self-interest of all the players in the game dictates prioritizing a race over ensuring positive outcomes from AGI.

In a scenario with no governance at all – replace Altman with Musk, with Ilya, with Hassabis – it makes no difference. Incentives will win out.

Aligning humans to net-positive outcomes is not a game of picking saints (i.e. denying human nature), it’s a game of channeling self-interest (i.e. accepting/using human nature).

Only Consequences Keep People Aligned with the Interests of Others

The great game of power has always involved optimizing across two factors:

  1. Actual influence, control, capability
  2. Perceived benevolence among those whom you must sway or control

“I’m the good guy, I am not selfish. I don’t stand with the corrupted powerful people, I stand for you, everyman!” – is not a statement of intention, it is a management of perception. For Altman, for Robespierre, and for anyone else who has ever used it, it is “2” (above), and nothing more.

The Mandate of Heaven has always been exactly this – and the Mandate of Heaven for AGI will play out by exactly these same terms.

Mandate of Heaven Quadrant - Daniel Faggella

And we play this tribal “angels vs. devils” game because we think it behooves us. 

We pick the actors who help further our own personal self-interest, and call them the angels – and we pick the actors who hinder our selfish benefit and call them the devils.

We call the angels “selfless” and the devils “selfish.” We tell ourselves that these labels are objective when in actuality they are mere extensions of our own self-interest.

So the next time a new AI lab makes the same promises as OpenAI when they were founded, or Anthropic when they were founded, we’ll see a lot of the same naive response.

We’ll say: “Ah, here are the good guys, we should give them the power instead of the bad guys!”

Even when we use this kind of moralizing to selfishly forward our own champion (China forwarding its AGI labs as “good” and Western firms as “bad”, some new AI dev seeing Anthropic’s purported moral commitments and calling them “good” and OpenAI “bad”, etc.), it doesn’t serve us (humanity).

The real “bad guy” is the belief in “bad guys” itself.

(i.e. the belief that any person or organization will act in any way other than its own self-interest).

The real “good guy” is a manful acceptance of amoral human nature, and taking that nature into account as we build systems to bind and curtail incentives.

There is no end to the arms race for individual AGI labs (OpenAI, Meta, etc) or for nations (primarily: USA and China) unless there is international governance and agreement in place to channel incentives towards a shared good (the benefit of humanity and post-human life), not merely a state of nature.

Governance and coordination done well isn’t about limiting actions and possibilities, it’s about restraining or channeling human self-interest in such a way that allows new possibilities for more generally positive outcomes.

Society puts the right to use violence into a set of (hopefully) objective and (relatively) clear laws – so that people might enjoy a greater range of action and greater ability to coordinate and achieve net positive outcomes. 

“Consider four marvels of our age — science, democracy, the justice system and fair markets. In each case the participants (scientists, litigants, politicians and capitalists) are driven by selfish goals. That won’t change; not till we redefine human nature. But for years, rules have been fine-tuned in each of these fields of endeavor, to reduce cheating and let quality or truth win much of the time. By harnessing human competitiveness, instead of suppressing it, these “accountability arenas” nourished much of our unprecedented wealth and freedom.” (from Disputation Arenas, by David Brin)

Very few people who advocate against AGI governance also advocate against property rights, or public plumbing, or the police. 

What AGI Governance Looks Like

In my opinion, AGI governance should meet the following simple criteria:

  • To deter deadly conflict between AGIs, or between nations/organizations.

You may disagree with the criteria above, and have a different mandate for what AGI governance should achieve. And you almost certainly disagree with me about the details of what a “preferable” future is.

That’s fine! 

If you believe anything other than a brute economic and military arms race is ideal – then you’re with me in believing that governance should exist.

What specific form should AGI governance take?

This is the question we should be unpacking now. 

We don’t need a one-world tyrannical global government to manage chemical weapons risk, or to enforce international trade rules – or to make Wi-Fi standards the same in both Boston and Seoul.

I suspect it should be a combination of hard and soft governance – with varying degrees of consequence and severity as the risks increase (see the ideas of Bengio, OECD AI Principles, etc).

Does governance work?

Much of the time, yes – even if it’s not perfect.

So long as it channels the self-interested motives of individuals towards a better overall aim for others (a group, a town, a country, a global community), then indeed it serves its purpose – and makes civilization possible.

In fact, you, dear reader, are almost certainly a fan of much of the basic coordination (locally, nationally, and internationally) that we see around us.

  • Rules of the Road: Having speed limits, having a Stop sign mean “stop”, having “right of way” rules, etc. – all lead not only to safer roads, but to a more efficient process of getting from point A to point B. We’re now permitted to drive in controlled and safe ways, without requiring everyone to thrash into the road violently just to make an inch of headway (if you’ve driven in nations without rules of the road, you know what this is like).
  • Property Rights: Those who rail against AGI governance do so from within their homes or apartments, with their cars parked outside. Doubtless they’re grateful that property rights exist – lest someone simply come and take what they have. They’re grateful that such rules are enforceable, because it allows us to focus on productive work – and not merely on protecting our families and possessions from someone else with more guns or more muscle.
  • Sports: If basketball or baseball or boxing had no rules at all, they would simply become war. With rules – and with objective referees upholding them (and fans who demand to see them enforced evenly) – we can harness self-interest, reduce cheating, and develop the art of the sport itself.

Coordination and enforcement of this kind isn’t perfect (people drive recklessly, steal, and cheat in sports – and some rules go too far and become unproductive) – but they allow us as national and international communities to have a wider range of action because we have structured something other than the state of nature.

Would AGI governance work?

Being interested in AGI governance doesn’t mean one must believe AGI governance would definitely work.

It simply means that you believe that some attempt at coordination is more likely to bring about beneficial outcomes for humanity and post-human intelligences than a pure state of nature arms race.

Concluding Note: Altman is Everyman

Trust no one with the scepter. 

Trust no one to behave “selflessly.” 

“Down with the big guy, I’m truly good, give me the power, I’m here to serve you!”

Remember how OpenAI was founded?

Remember how Anthropic was founded?

Some of you got excited when OpenAI was founded because someone would finally be sticking up for the everyman. Then as OpenAI’s natural self-interest became self-evident, you heard Musk’s claims about Grok being “for the people” and – like little children – took the bait again.

The phrase “Don’t hate the player, hate the game” doesn’t take this idea far enough.

Don’t hate the game or the player.

Deal with the game (amoral human nature) as reality – and – just as humanity has done imperfectly (but sufficiently) in sports, in sciences, in democracy – create another, better game around it. One that gives us a wider range of positive outcomes for humanity and posthuman intelligence.

Aligning humans to net-positive outcomes is not a game of picking saints (i.e. denying human nature). It’s a game of channeling self-interest (i.e. accepting/using human nature).

No one – not you, not your tribe, not your favorite leader – is “sticking up for the everyman.” 

While many are less smart and ambitious than Altman – no one is radically, appreciably more “selfless” than Altman.

Altman is everyman.

Until we start from that base, there is no progress toward encouraging beneficial AGI outcomes – and we may meet our demise passing the scepter of AGI power between a dozen human hands that will put their interests first, instead of getting our act together on governance.