Green Eggs and Ham – Facing Future Technology and AI Like an Adult
If you were a child at any point in the last 50 years, you’re probably familiar with the story. Sam I Am tries to get a protagonist (let’s call him…
If you claim to be fighting for positive AGI outcomes and you’re calling Altman “selfish” or “dangerous” or “evil,” you’re part of the problem.
Here’s the TL;DR on this article:
In a simple visual format, I’m arguing that Selecting Saints is an abysmal approach to positive AGI outcomes:
Remember what the man himself said. Sam is wise enough to play the game, but also frank enough to lay it out:
Today’s leaders or employees of AGI labs have two choices:
Option 1: Build AGI first – and have some chance of shaping the outcome.
Option 2: Don’t build AGI first – and be at the mercy of (or destroyed by) whoever does.
Because these are the only alternatives, even decent and well-intentioned people will recklessly drive towards AGI, regardless of the risk.
This has led to an obvious arms race on the way to general intelligence. Not only between companies (Meta/Google/MSFT have all overtly stated that they’re driving towards AGI), but also between nations (China has made it clear that AGI is their goal).
“AI will probably destroy us all, but in the meantime there will be some great companies.” – Sam Altman
“With artificial intelligence we are summoning the demon. You know the story of the guy with the pentagram and the holy water… like… yeah (sarcastically) you’re pretty sure you can control the demon” – Elon Musk
“There’s a long tail of things of varying degrees of badness that could happen. I think at the extreme end is the Nick Bostrom-style of fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.” – Dario Amodei
All three of these individuals are still actively, ravenously driving towards AGI predominance. Given the alternative (being destroyed by someone else’s AGI), I don’t blame them. If given the same two choices, essentially everyone would do the same things they’re doing now.
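To make the incentive structure concrete, here’s a toy sketch of that two-choice dilemma as a payoff matrix. The numbers are invented purely for illustration (nothing here comes from the labs themselves); only the structure matters:

```python
# Toy model of the AGI arms race as a two-player game.
# The payoff numbers are made up for illustration only; the point is the
# structure: whatever the rival does, racing looks better to each lab.

payoffs = {
    # (our_choice, rival_choice): (our_payoff, rival_payoff)
    ("race", "race"): (1, 1),  # both race: riskier world, but neither is left behind
    ("race", "hold"): (3, 0),  # we build AGI first: we shape the outcome
    ("hold", "race"): (0, 3),  # the rival builds AGI first: we're at their mercy
    ("hold", "hold"): (2, 2),  # mutual restraint: best shared outcome, but unstable
}

def best_response(rival_choice: str) -> str:
    """Return the choice that maximizes our payoff given the rival's choice."""
    return max(["race", "hold"], key=lambda ours: payoffs[(ours, rival_choice)][0])

if __name__ == "__main__":
    for rival in ("race", "hold"):
        print(f"If the rival chooses {rival!r}, our best response is {best_response(rival)!r}")
    # Racing is the best response either way (a dominant strategy),
    # even though (hold, hold) leaves both players better off than (race, race).
```

Racing dominates for each player even though mutual restraint would leave everyone better off – which is exactly the kind of trap that only changed incentives (i.e. governance and enforcement) can get players out of.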
Ask people how to solve this arms race scenario we find ourselves in, and you’ll typically hear:
“We need to get rid of [insert name of founder, i.e. Altman], he’s a selfish psycho!”
The implicit assumption here is that there is a class of selfless person who can reliably take the helm of power and wield it purely in the service of abstract “humanity,” untainted by their own self-interest.
But the truth is this:
We can never reliably count on any person or organization to act in any way other than in their own self-interest.
Humans are not immoral, they are amoral.
The conatus is king.
We are not locked in an AGI arms race because “bad guys” are running AGI firms. We are locked into an AGI arms race because the self-interest of all the players in the game dictates prioritizing a race over ensuring positive outcomes from AGI.
In a scenario with no governance at all – replace Altman with Musk, with Ilya, with Hassabis – it makes no difference. Incentives will win out.
Aligning humans to net-positive outcomes is not a game of picking saints (i.e. denying human nature), it’s a game of channeling self-interest (i.e. accepting/using human nature).
The great game of power has always involved optimizing across two factors:
1. Accruing and wielding actual power (resources, capability, force).
2. Managing how that power is perceived – being seen as legitimate, as “the good guy.”
“I’m the good guy, I am not selfish. I don’t stand with the corrupted powerful people, I stand for you, everyman!” – is not a statement of intention, it is a management of perception. For Altman, for Robespierre, and for anyone else who has ever used it, it is “2” (above), and nothing more.
The Mandate of Heaven has always been exactly this – and the Mandate of Heaven for AGI will play out by exactly these same terms.
And we play this tribal “angels vs. devils” game because we think it behooves us.
We pick the actors who help further our own personal self-interest, and call them the angels – and we pick the actors who hinder our selfish benefit and call them the devils.
We call the angels “selfless” and the devils “selfish.” We tell ourselves that these labels are objective when in actuality they are mere extensions of our own self-interest.
So the next time a new AI lab makes the same promises as OpenAI when they were founded, or Anthropic when they were founded, we’ll see a lot of the same naive response.
We’ll say: “Ah, here are the good guys, we should give them the power instead of the bad guys!”
Even when we use this kind of moralizing to selfishly forward our own champion (China framing its AGI labs as “good” and Western firms as “bad,” a new AI developer seeing Anthropic’s purported moral commitments and calling Anthropic “good” and OpenAI “bad,” etc.), it doesn’t serve us (humanity).
The real “bad guy” is the belief in “bad guys” itself.
(i.e. the belief that any person or organization will act in any way other than its own self-interest).
The real “good guy” is a manful acceptance of amoral human nature, and taking that nature into account as we build systems to bind and curtail incentives.
There is no end to the arms race for individual AGI labs (OpenAI, Meta, etc) or for nations (primarily: USA and China) unless there is international governance and agreement in place to channel incentives towards a shared good (the benefit of humanity and post-human life), not merely a state of nature.
Governance and coordination done well isn’t about limiting actions and possibilities, it’s about restraining or channeling human self-interest in such a way that allows new possibilities for more generally positive outcomes.
Society puts the right to use violence into a set of (hopefully) objective and (relatively) clear laws – so that people might enjoy a greater range of action and greater ability to coordinate and achieve net positive outcomes.
“Consider four marvels of our age — science, democracy, the justice system and fair markets. In each case the participants (scientists, litigants, politicians and capitalists) are driven by selfish goals. That won’t change; not till we redefine human nature. But for years, rules have been fine-tuned in each of these fields of endeavor, to reduce cheating and let quality or truth win much of the time. By harnessing human competitiveness, instead of suppressing it, these “accountability arenas” nourished much of our unprecedented wealth and freedom.” (from Disputation Arenas, by David Brin)
Very few people who advocate against AGI governance also advocate against property rights, or public plumbing, or the police.
In my opinion, AGI governance should meet the following two simple criteria:
1. Rein in the brute arms race dynamics that make catastrophic outcomes more likely.
2. Channel incentives towards a preferable future for humanity and post-human intelligence.
You may disagree with those two points above, and have a different mandate for what AGI should achieve. And you almost certainly disagree with me about the details of what a “preferable” future is.
That’s fine!
If you believe anything other than a brute economic and military arms race is ideal – then you’re with me in believing that governance should exist.
What specific form should AGI governance take?
This is the question we should be unpacking now.
We don’t need a one-world tyrannical global government to manage chemical weapons risk, or to enforce international trade rules – or to make Wi-Fi standards the same in both Boston and Seoul.
I suspect it should be a combination of hard and soft governance – with varying degrees of consequence and severity as the risks increase (see the ideas of Bengio, OECD AI Principles, etc).
Does governance work?
Much of the time, yes – even if it’s not perfect.
So long as it channels the self-interested motives of individuals toward a better overall aim for others (a group, a town, a country, a global community), then indeed it serves its purpose – and makes civilization possible.
In fact, you, dear reader, are almost certainly a fan of much of the basic coordination (locally, nationally, and internationally) that we see around us.
Coordination and enforcement of this kind isn’t perfect (people drive recklessly, steal, and cheat in sports – and some rules go too far and become unproductive) – but it allows us, as national and international communities, to have a wider range of action because we have structured something other than the state of nature.
Would AGI governance work?
Being interested in AGI governance doesn’t mean one must believe AGI governance would definitely work.
It simply means that you believe that some attempt at coordination is more likely to bring about beneficial outcomes for humanity and post-human intelligences than a pure state of nature arms race.
Trust no one with the scepter.
Trust no one to behave “selflessly.”
“Down with the big guy, I’m truly good, give me the power, I’m here to serve you!”
Remember how OpenAI was founded?
Remember how Anthropic was founded?
Some of you got excited when OpenAI was founded because someone would finally be sticking up for the everyman. Then as OpenAI’s natural self-interest became self-evident, you heard Musk’s claims about Grok being “for the people” and – like little children – took the bait again.
The phrase “Don’t hate the player, hate the game” doesn’t take this idea far enough.
Don’t hate the game or the player.
Deal with the game (amoral human nature) as reality – and – just as humanity has done imperfectly (but sufficiently) in sports, in sciences, in democracy – create another, better game around it. One that gives us a wider range of positive outcomes for humanity and posthuman intelligence.
Aligning humans to net-positive outcomes is not a game of picking saints (i.e. denying human nature). It’s a game of channeling self-interest (i.e. accepting/using human nature).
No one – not you, not your tribe, not your favorite leader – is “sticking up for the everyman.”
While many are less smart and ambitious than Altman, no one is radically, appreciably more “selfless” than he is.
Altman is everyman.
Until we start from that base, there is no progress toward beneficial AGI outcomes – and we may meet our demise passing the scepter of AGI power between a dozen human hands that will put their interests first, instead of getting our act together on governance.