If you were a child at any point in the last 50 years, you’re probably familiar with the story.
Sam-I-Am tries to get a protagonist (let’s call him “Grouchy”) to try green eggs and ham, and their comical dialogue continues (somehow) for 72 pages.
<Sam-I-Am> Do you like green eggs and ham?
<Grouchy> I do not like them, Sam-I-Am. I do not like green eggs and ham.
<Sam-I-Am> Would you like them here or there?
<Grouchy> I would not like them here or there. I would not like them anywhere. I do not like green eggs and ham. I do not like them, Sam-I-Am.
If you’re a CEO and your CFO tells you that your cash will run out in 12 days, denial is not an appropriate response.
If you’re a military general and someone tells you that the enemy is encamped across the river and will soon invade, denial is not an appropriate response.
We can all agree that denial will not solve your problems. If you have a version of the world you wish to create, or values you wish to uphold, putting your head in the sand about facts and trends is a poor way to achieve your aims. Yet when ideas about radically different futures come up, denial is exactly the common response:
“No way! Even if someone was 100x more capable than a regular human after brain augmentation, they’d always respect other humans as equals! Always!”
“That sounds horrible! It’s stupid that you even think this is possible!”
“Ridiculous! No one in their right mind wants a ‘perfect’ AI relationship! They want imperfection, and to be loved by someone REAL!”
This is not a refutation of the core argument (that the human condition is on track to change radically, and that we need to take those changes seriously). It is a petulant feeling – manifested in knee-jerk, pouting words.
Like Grouchy’s protests in Dr. Seuss’s book, these objections rank as Counterargument at best, and Name-Calling at worst (see Graham’s Hierarchy of Disagreement).
Ideas that violate our “sacred” beliefs (about how life should be, about the love we have for a spouse or child, about the nature of the human condition) make us act very differently from the general or the CEO in my examples above. They make us petulant. We stomp our feet. We cross our arms.
To a much lesser extent, we’ve all seen this happen over the last 25 years with tech adoption:
In 2005 your father still says he’ll never buy things online, and by 2015 he does all Xmas and grocery shopping on Amazon and nowhere else.
Your neighbor talks about how stupid it is to ride with a stranger in an Uber, and then ends up using the service all the time (along with Airbnb) as they become more popular.
Your luddite cousin who railed against the evils and distractions of social media now has a YouTube channel about cooking and is trying to turn it into a side hustle.
Someone who ridiculed you for online dating in 2012 got divorced and found their new partner through Tinder.
Like Grouchy himself, they tried Green Eggs and Ham, and they liked it.
<Grouchy> They are so good, so good, you see! So I will eat them in a box. And I will eat them with a fox. And I will eat them in a house. And I will eat them with a mouse. And I will eat them here and there. Say! I will eat them anywhere! I do so like green eggs and ham! Thank you! Thank you, Sam-I-Am.
And so it will go with future technologies. They’ll be radically different from today’s norms and experiences, people will feel repulsed, but when they try them, they’ll often stick. Previously monstrous and sacrilegious ways of being and behaving are quickly adopted as the norm if they fulfill our drives more effectively and reliably.
Take this short list of potential radical changes as an example:
People may adopt artificial intelligence romantic partners instead of human ones – not merely for sexual needs, but for more rich and selfless emotional support and encouragement (see: Artificial Intimacy).
People may pursue power and ambition by immersing themselves in AI-enhanced VR/AR environments that help them focus for extremely long periods of time, and renew their energies more reliably/effectively in order to increase overall productivity vastly beyond that of an un-augmented human (see: Ambitious AI).
People may enter purpose-built chambers for long-term VR-immersed experiences, spending almost all waking hours in special pods to facilitate this immersion (see: Husk).
People who radically augment their minds and become vastly more creative, more intelligent, and even more sentient than regular humans may rightly believe themselves to be more morally valuable than un-augmented humans (see: More Valuable).
I’m not telling you to adopt these technologies. I’m not telling you that all of them will come to pass – or that, even if they do, you will enjoy them.
But I am saying that many of them will be ubiquitously adopted, and that to think none of them will be would be silly.
People will try them – and often – they will like them.
You will try many of them, and you will like many of them. Your “sacred” ideas will not hold in the face of the forces that drive tech adoption (drives). Believing that they will is self-deception, and it leaves you in a poor position to adapt to what’s coming.
You will like them in a boat.
You will like them with a goat.
You will like them in the rain.
You will like them in a train.
Just as the world paid little heed to your grandparents’ ardent beliefs about what was “sacred” or “wrong” (premarital sex, Uber, 14 hours a day of screen time, etc.), the future doesn’t care much about your ideas of the “sacred” or “wrong.” People adopt technology when it fulfills their drives. Yourself included.
This adoption is soon to result in radical changes to the human condition that it behooves you to look at squarely.
As a friend, I ask you to un-cross your arms, un-furrow your brow, and open your mind and your eyes.
Note: This article was inspired in part by Doug Lewis, one of my Twitter followers, who used this analogy in a Twitter convo with me (here).