This week my second interview on Simulation Series is live. If you aren’t familiar with Allen Saakyan’s Simulation Series, I recommend checking out his YouTube playlists page.
Allen and I touched on a lot of ideas in this conversation, pulling from (or adding to) many of the more popular essays on danfaggella.com. My friend Ryan was kind enough to pull some of the best quotes from the interview, and I’ve grouped them into “clusters” by topic, each linked to the essays and videos they relate to.
At the bottom of the article, I list some of Allen’s other interviews you might dig. Enjoy.
Each of the quotes below comes with a time stamp. Feel free to jump around in the video below, depending on what you’re interested in:
4:20 — “I think the two main questions for humanity are (a) What are we turning ourselves into [i.e. what’s the point]?, and (b) How do we get there without all dying violently? I think there should be UN sustainable development goals for humanity moving forward beyond people, and can we create a non-arms-race dynamic to get there?”
The quotes below build on my ideas about the transhuman transition, and the motives by which most of humanity will enter more and more personalized worlds. The best essays to see my more complete thoughts on these topics are Lotus Eaters vs World Eaters and Programmatically Generated Everything.
10:00-ish — “Japan is the canary in the coal mine of the first world. You get rich, you get sad because there is no meaning, then you escape into virtual worlds… I think that’s where the whole rich first world is going – or a lot of the males, anyway.”
I’ve written an entire essay about Japan’s hikikomori, and what they represent about the first-world transition into digital escape.
11:40 — “An unenhanced person maybe 20 or 30 years from now will have a zero percent chance of running the show [being powerful or in control].”
15:00 — “I think there will be people with their physical body as a husk [i.e. living in VR, or in brain-machine interface], experiencing things through senses and through limbs and permutations of senses that humans cannot access.”
17:40 — “When the social pressure from the pragmatic perspective happens with VR [we need it to work, to interact with friends, to get things done], that’s when the ‘going in’ will begin.” (i.e., when we will begin escaping into virtual worlds.)
I wrote 6+ years ago that human ideas (what drives us, what motivates us) will tear us from humanity. I believe that our need to fulfill our drives is insatiable, and that it will eventually push us to expand our ability to know, to create, to relate, to feel, and to have experiences that humanity cannot now access. The same urges that drive us to build a company or find a life partner or visit a sacred temple will be the urges that drive us to “plug in” our brains to interfaces that open up vast, expansive vistas of qualia beyond our current reach.
11:00 — “People don’t want sleep, food, physical bodies… we don’t want to have to cut our nails… we don’t want to need relationships in order to feel happy. We don’t want any of the things we ostensibly want – we want pleasurable varieties of qualia… and any technologies that robustly fulfill that [VR, brain-machine interface, pharmacology], that’s the market – the market ends with that.”
See my 2014 TEDx at 11:10, where I posited that any technology that allows us to volitionally change our qualia to match our preferences would (a) be undeniably appealing and (b) open up moral quagmires that we aren’t even close to being able to deal with.
14:15 — “You can only combine the ideas you have – so the set of things we think we want is only so large. But there is a slippery slope into other realms… if we’re not hindered by our current hardware, there will be permutations of positive experience to which we are not now privy… just like monkeys are not privy to read Montaigne or appreciate oil painting.”
15:30 — “The way you see [VR or brain-machine interface] is with jacuzzis and Mariah Carey 1998… you sell it like that, but that’s not where it ends.”
23:00-ish — “Increasingly… there will be programmatically generated friends that are just better than your friends are. And I love my friends! But you will have someone with all the wisdom of Pericles and Einstein who has none of their own goals and only wants to help you – that’s hard to compete with.”
26:50 — “Many people will live in a husk-like state where they are strapped into VR and haptic experiences… and they don’t leave. Even the excretion of waste will happen while reclined in this specific virtual environment… strapped into the world of perception – which will be the new ‘real world’.”
Explore my more complete ideas in the full essay titled Substrate Monopoly. The basic idea is that whoever controls the physical compute that houses human experience (the virtual world) or the most powerful AI will be dominant. In the West this may start with private companies; in China it would be an arm of the CCP. There will likely be multiple substrate monopolies globally – which may eventually converge to just one.
34:30 — “Whoever manages how you ‘go in’, and whoever manages the experience you have – whoever owns the computational substrate – is the deity.”
37:00 — “There are some people who will be of the belief that if you are entering a [programmatically generated world, via brain-machine interface or super-immersive VR] world like that, and someone else has built the technology that is creating your experience, and that someone is controlling the physical substrate where all of your experience is housed… that at the very end of that day… that person is safer than you. And I think that is the right supposition.”
The notion that safety is only found in strength is a sad and dangerous one that I hope isn’t true (read Artificial Intelligence and the Last Words of Alexander).
Most of what we believe to be moral tenets and insights are contextual, limited to our circumstances and (more importantly) to our Homo sapiens hardware and software. I believe that moral theories (virtue ethics, utilitarianism, etc.) are important for us personally and for governance, but that these ideas won’t hold up when we reach post-human intelligence.
I’ve written previously that the highest good an AGI could do is probably not maximizing utility, but discovering new and higher ways of valuing things (just as we humans have discovered higher moral ideas than rodents or crickets). I’ve also argued that post-human morality is very unlikely to place much value on humanity – at least not for long enough to keep us safe in a world of superintelligent AI that has its own evolving sense of what to value and how. My TEDx at Cal Poly is arguably mostly about this exact idea.
42:00 — “The monkeys can’t understand Shakespeare, and then 3% genetic difference and you get humans. Whenever we make that next 3% genetic difference [evolve upward in intelligence], the ideas you and I have about morality will be gobbledegook… because there will be grander modes and means of valuing and thinking that we humans can’t access.”
50:30 — “The expansion into drastically higher degrees of creativity and intelligence beyond hominids will have to happen in virtual worlds, because if it happens in the physical world we [different beings with different kinds and extents of intelligence and goals] will kill each other. I’m just calling the shot on that.”
53:00-ish — “Utilitarianism in no way ensures that an AGI would keep humans happy. There are more effective ways to manufacture blissful qualia than maintaining my individual consciousness and childhood memories of playing Nintendo 64.”
26:20 — “About atoms versus bits. Atoms are just so… so… lame… you can use them to build these other worlds but like wow… they’re just so cold and uncaring. Not that bits are caring…”
55:00-ish — “The simulation argument [whether or not we are living in a simulation] is only jarring if you have believed that you live in ‘base reality’ in the first place. If you have never escaped Hume’s Fork, then for all you know everything has always been a mirage anyway.”
“Meat puppet” = Human body. I personally use the term monkey suit, but Allen’s idea is a fun one as well. Neither of us intends any disrespect to the human form – but neither of us feels too proud to make fun of the fetters of our present condition.
“Qualia catalog” = Array of all possible things we can experience. AI and neurotechnology will vastly expand this catalog and make it much easier to explore just the parts we desire. This is a nice term for “experience-space”.
There are lots of them worth exploring; the AI-related episodes in particular are worth digging into.