Green Eggs and Ham – Facing Future Technology and AI Like an Adult
If you were a child at any point in the last 50 years, you’re probably familiar with the story. Sam-I-Am tries to get a protagonist (let’s call him…
The 5 Stages of Grief model isn’t perfect – but it does seem to overlap well with the experiences of grieving people. The psychological model needn’t be perfect – the core point of this article stands either way:
Responding to uncomfortable truths with anything other than Acceptance robs us of the ability to think critically and act decisively – which are our only chance to create a better future.
The opportunities are great for accepting and acting on crucial trends and uncomfortable realities:
The costs are high for not accepting crucial trends and uncomfortable realities:
Leaders in China, Japan, Facebook, and Kodak didn’t have the option to freeze time and deny reality.
Yet when it comes to AGI and posthuman intelligence, we try to pretend that freezing time is possible.
Much of the current AGI discourse – even among people who identify as transhumanists – is anchored in naive attempts to hide from [Denial] or rail against [Anger] the impending trends around AI and the future of the human condition.
This manifests in dangerously childish head-in-the-sand perspectives on viable or realistic human futures, including the following:
This is, unfortunately, pure grade-A cope.
This leads the discourse to land, again and again, on the silly notion of the Eternal Hominid Kingdom – a set of far futures that imagine humans (as they are now) living on Mars or the Moon, in a world where disease, hunger, and education are solved problems for everyone.
This might be an interesting vision for the next short handful of years – but it isn’t realistic, and it sets us up for disaster and danger in the years of change that face us.
Uncomfortable as it is, we must Accept* the following:
1 – Posthuman Intelligence is Inevitable – soon-ish we will build and/or become said intelligence. Reasons:
2 – Posthuman Intelligence is Likely to End Humanity – even if it is we who evolve or merge with said intelligence. Reasons:
It’s relatively easy to determine where someone sits on the “Stages of Posthuman Grief” model:
Reading my “2 Things We Must Accept” section – where do you think you sit on the Stages of Posthuman Grief?
Do you write it all off flippantly as sci-fi (Denial)?
Are you upset that I’m pointing these trends out, and has that discomfort led you to interpret malice and evil as my motive (Anger)?
I’m not saying Acceptance is easy.
I have mourned. As I mention in my longer essay on inevitable posthumanism:
Source: You Don’t Want What You Think You Want – Emerj.com
But Accept we must.
Maybe we can push these radical changes (like the augmentation of human minds, and the arrival of AGI) onto future generations, when our generation is long gone. But I suspect that only relatively minor slow-downs are possible, and that for the most part we must work hard to form an embankment against the oncoming river of the future. We can influence its path, but we can no more stop its flow than we can stop gravity’s pull on water.
We cannot prevent the future outright with a head-in-the-sand strategy.
Kodak and 15th century China already tried that strategy, remember?
Without Acceptance – we cannot think critically or act decisively, which are the only things that give us a fighting chance for a better future. The eternal hominid future is cope and nothing more.
From a governance perspective, we are left with The Two Questions:
I’m not smart enough to answer these questions myself (no one person is) – but I feel confident that:
But I’m willing to wiggle on any of my own particular ideas as trends play out, and as I come to understand new things. This is what Acceptance demands.
…
NOTE: You don’t need to accept anything you don’t want to. And I don’t pretend to see the future. I think we’re all morons and neurons. I’m merely here to explain my reasoning (with references and links to my own writing, and that of other thinkers whom I respect). Zuckerberg’s idea of mobile-first social media might have been wrong – and my notions of AGI risk and posthumanism may also be wrong. I think the trends make a strong case, but you can decide for yourself.
If you disagree with my reasoning – convince yourself that you’re not in Denial (“Daniel’s ideas are never gonna happen”) or Anger (“Daniel is a terrible person – laying out these trends must mean he wants the end of humanity!”). Instead, dive into the specifics of my reasoning (linked under “2 Things We Must Accept”) and let me know where my errors lie. My ideas have changed over time, and I suspect I may have something to learn from you as well. Find me on Twitter and ping me.