A Partial Inquiry on Fulfillment Beyond Humanity
A friend shared this cartoon with me recently:
I laughed… but in a somber way.
It’s funny because the second bird is totally limited in his actions, thoughts, and values by the paltry amount of brain that he has in his silly skull. He may be a nice bird – and his abilities serve him well for survival, but any of his ideas about values or nature are just reflections of his limitations, and little else.
But you and I, dear reader, have silly little skulls, too, with limited brain space. We may be good and well-intended people – and our minds serve us well for survival, but any of our ideas about values or nature are just reflections of our limitations.
In an impending era of artificial general intelligence (AGI) and cognitive enhancement, it makes sense to look squarely at our limitations, and ask how – in spite of them – we might strive for something worthwhile.
I ask:
What goals could we set that aren’t laughably limited by our meagre hardware and software?
In this article, I’ll aim to answer that question.
Everything we love and value is the equivalent of french fries: arbitrary and relevant only to our own small natures.
We exist as a tiny speck in a potentially infinite state-space of possible minds. We have more intelligence and potentia than a seagull – but in the grand scheme, it’s not by much.
Some people have posited theories about the meaning of life and the purpose of AGI. Here are just a few that I’ve read or heard over the last few years:
The values that we hold highest, like “Love“, “Happiness”, “Relationships”, “Creativity”, “Exploration”, “Evolution”, are – for the most part – proxies to some evolutionary drive, or some need that correlated to survival or mating. It could be argued that these values are hardly “our own”, but are just more nuanced versions of the same drives that lemurs or orangutans have.
Not only our values, but our understanding of space, time, consciousness, etc. are likely all merely silly little seagull-level equivalents to whatever is “real” or true. Positing this hypothesis on Twitter garnered some interesting responses:
"Well, AGI could never do THAT, because molecules/physics/etc doesn't work like that!"
No, brother.
Chimps are 97% genetically identical to us, and have no real idea of what dirt is, never mind atoms.
You think AGI won't have deeper conceptions/mastery of nature than we?
Lol.
— Daniel ‘No, Brother’ Faggella (@danfaggella) December 24, 2023
In addition, all human constructs are limited by the hardware and software of our monkey suit, our hominid form.
They are but fries on the pier.
Just as your dog can’t learn about the nuances of Montaigne’s essays, and your goldfish can’t appreciate the complexities of nuclear non-proliferation efforts, you (and I, and all humans) can’t possibly imagine most of what could be valued.
We can’t even imagine the higher, further, deeper, or more varied modes of “valuing” that a future intelligence would have, or the capabilities it would have, or the experiences it might feel (it might have experiences vastly beyond what we know of as consciousness).
Let’s be grateful for the understanding and the brain we have. I’m glad I wasn’t born a seagull or an earthworm… but:
Let’s be grateful for the values we have. Some were passed on to us, some were imparted to us through great books, or important lessons in life… but:
The meaning of the Singularity, the purpose of AGI, as far as I can tell, is to figure out what the hell is going on.
To expand our mind-space and capability-space beyond the hardware and software that keeps our ideas and values at a french fry level.
Imposing our feeble, half-baked hominid values onto an AGI system would not be flourishing – it would be stifling – just as it would have been stifling if apes had somehow imposed their values on humans forever, before humans evolved.
The best AGI would not be a calcification – of our human values and understanding – but a blooming beyond them.
It would not try to “do” things humans can think of, but would create something with infinitely more ability to “do.” It would not optimize for some “value” that was cooked up in human skulls, but would infinitely expand its ability to “value” and set aims.
That kind of an AGI would be a Worthy Successor: A posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government, and determine the future path of life itself.
This would imply vastly expanding Potentia: All the possible ways that an entity can behoove its own survival. This includes the total range and capability-space of an agent, everything from the capacity for abstract thinking – to physical camouflage – to fangs and claws – to the use of language – and beyond.
This seems to be what nature is doing already: Bubbling up not just more complexity – or more “intelligence” – but more potentia (which is different from intelligence in that it encompasses all possible capabilities of an entity).
Anchoring my goals – or the goals of posthuman intelligence – off of human understanding would be ridiculous, and a tragic, horrible fetter on the AGI.
That seagull who considers fries on the pier as the meaning of life is worthy of our pity. We pity a worldview locked into the frame of its paltry hardware and software. That animal lives in a dead end. It identifies the boundaries of the world – and the boundaries of its rightful goals – with the boundaries of its little brain. It suffers either from ignorance (not even considering other possibilities besides its immediate experience) or hubris (from thinking that its bird brain has mastered understanding and meaning in life).
As a human, I will study the limited human-understandable science I can study, I will enjoy the limited human-enjoyable things that hold meaning to me in this little hominid life of mine (love, conversation, friendship, walks in nature) – but setting them as my boundaries – or as the “meaning” of life – would be pitiful.
The only goal we can be comfortable setting with our fettered minds is the goal of expanding “mind” (potentia) and life itself – in order to set higher goals and do higher things.
Reader – let us not suffer from ignorance or hubris like the seagull.
Reader – let us not be pitiful.
I asked at the beginning of this article:
What goals could we set that aren’t laughably limited by our meagre hardware and software?
The answer, I posit, is:
The goal of creating something that vastly surpasses all the limitations of our hardware and software. Something that could set aims as far beyond our imagination as the greatest human goals are beyond the imagination of earthworms. Something whose understanding is as far beyond our own as the net total of all human knowledge is beyond the comprehension of seagulls.
Nibble on the fries, comrade, but keep thy eyes on the horizon. There are things beyond the pier, beyond fries, beyond our ability to comprehend. Do you not burn to unlock them?
Header image credit: “Western gull” Wikipedia page