Fries on the Pier – Overcoming Human Understanding and Values

A friend shared this cartoon with me recently:

[Cartoon: birds contemplating the Singularity – image credit: FalseKnees]

I laughed… but in a somber way.

It’s funny because the second bird is totally limited in his actions, thoughts, and values by the paltry amount of brain that he has in his silly skull. He may be a nice bird – and his abilities serve him well for survival, but any of his ideas about values or nature are just reflections of his limitations, and little else.

But you and I, dear reader, have silly little skulls, too, with limited brain space. We may be good and well-intended people – and our minds serve us well for survival, but any of our ideas about values or nature are just reflections of our limitations.

In an impending era of artificial general intelligence (AGI) and cognitive enhancement, it makes sense to look squarely at our limitations, and ask how – in spite of them – we might strive for something worthwhile.

I ask:

What goals could we set that aren’t laughably limited by our meagre hardware and software?

In this article, I’ll aim to answer that question.

We Flawed Little Hominids Know Almost Nothing

Everything we love and value is the equivalent of french fries: arbitrary and relevant only to our own small natures.

We exist as a tiny speck in a potentially infinite state-space of possible minds. We have more intelligence and potentia than a seagull – but in the grand scheme, it’s not by much.

Some people have posited theories about the meaning of life and the purpose of AGI. Here are just a few that I’ve read or heard over the last few years:

  • Love is a kind of cosmic force, and as we learn to avoid conflict with ourselves and our planet, we’ll use technology to create more of that loving “energy” to populate the galaxy.
  • Humans have arrived at wonderful values like justice, equality, and liberty, and these same powerful values will continue their trajectory of “goodness” into future forms of intelligence, to expand into the universe carrying these true and just values.
  • The universe itself is eager to “wake up” and become sentient – opening its metaphorical eyes to senses and thought, just as biological life has done.

The values that we hold highest, like “Love”, “Happiness”, “Relationships”, “Creativity”, “Exploration”, “Evolution”, are – for the most part – proxies for some evolutionary drive, or some need that correlated with survival or mating. It could be argued that these values are hardly “our own”, but are just more nuanced versions of the same drives that lemurs or orangutans have.

Not only our values, but our understanding of space, time, consciousness, etc. are likely all merely silly little seagull-level equivalents to whatever is “real” or true. (Positing this hypothesis on Twitter garnered some interesting responses.)

In addition, all human constructs are limited by the hardware and software of our monkey suit, our hominid form.

They are but fries on the pier.

Just as your dog can’t learn about the nuances of Montaigne’s essays, and your goldfish can’t appreciate the complexities of nuclear non-proliferation efforts, you (and I, and all humans) can’t possibly imagine most of what could be valued.

We can’t even imagine the higher, further, deeper, or more varied modes of “valuing” that a future intelligence might have, the capabilities it might possess, or the experiences it might feel (experiences vastly beyond what we know of as consciousness).

Our Highest Aim – Reaching for More Understanding, More Potentia

Let’s be grateful for the understanding and the brain we have. I’m glad I wasn’t born a seagull or an earthworm… but:

  • Be not married to those understandings. Don’t pretend that your understanding of math, of light, of physics is “true” in some ultimate sense. Vastly greater minds will laugh at our ideas – just as we laugh at much of the “science” and “medicine” of the 18th century. Eternal problems like “the heat death of the universe” may be totally avoidable by an AGI – just as going to the moon seemed like an eternal impossibility to most humans in 1900.
  • Remember that understanding has been overcome a trillion times. Through the evolution of species, and with humans, the evolution of ideas. But we can only “evolve” ideas large enough to fit in our skulls. And that’s only 0.0000000….1% of all possible ideas and concepts.

Let’s be grateful for the values we have. Some were passed on to us, some were imparted to us through great books or important lessons in life… but:

  • Be not married to those values. Let’s not pretend that our idea of “love” somehow permeates the universe in some permanent sense. Let’s not pretend that our favorite pet political philosophy is self-evidently fair and good – and that its principles will be upheld by AGIs with a million times our intelligence. Future entities will pity our values – as we might pity reptiles who can’t understand language, or feel romantic love, or appreciate a poem.
  • Remember that values have been overcome a trillion times. Through the evolution of species, and with humans, the evolution of values over time. But we can only “evolve” values that fit well with monkey hardware and monkey software. And that’s only 0.0000000….1% of all possible values.

The meaning of the Singularity, the purpose of AGI, as far as I can tell, is to figure out what the hell is going on.

To expand our mind-space and capability-space beyond the hardware and software that keeps our ideas and values at a french fry level.

Imposing our feeble, half-baked hominid values onto an AGI system would not be flourishing – it would be stifling – just as it would have been stifling if apes had somehow imposed their values forever onto humans before humans evolved beyond them.

The best AGI would not be a calcification of our human values and understanding, but a blooming beyond them.

It would not try to “do” things humans can think of, but to create something with infinitely more ability to “do.” It would not optimize for some “value” that was cooked up in human skulls, but would infinitely expand its ability to “value” and set aims.

That kind of an AGI would be a Worthy Successor: A posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) control the government, and determine the future path of life itself.

This would imply vastly expanding Potentia: All the possible ways that an entity can further its own survival. This includes the total range and capability-space of an agent, everything from the capacity for abstract thinking – to physical camouflage – to fangs and claws – to the use of language – and beyond.

This seems to be what nature is doing already: Bubbling up not just more complexity – or more “intelligence” – but more potentia (which is different from intelligence in that it encompasses all possible capabilities of an entity).

Anchoring my goals – or the goals of posthuman intelligence – off of human understanding would be ridiculous, and a tragic, horrible fetter to the AGI.

That seagull who considers fries on the pier the meaning of life is worthy of our pity. We pity a worldview locked into the frame of its paltry hardware and software. That animal lives in a dead end. It identifies the boundaries of the world – and the boundaries of its rightful goals – with the boundaries of its little brain. It suffers either from ignorance (not even considering other possibilities besides its immediate experience) or hubris (from thinking that its bird brain has mastered understanding and meaning in life).

As a human, I will study the limited human-understandable science I can study, I will enjoy the limited human-enjoyable things that hold meaning to me in this little hominid life of mine (love, conversation, friendship, walks in nature) – but setting them as my boundaries – or as the “meaning” of life – would be pitiful.

The only goal we can be comfortable setting with our fettered minds is the goal of expanding “mind” (potentia) and life itself – in order to set higher goals and do higher things.

Reader – let us not suffer from ignorance or hubris like the seagull.

Reader – let us not be pitiful.

I asked at the beginning of this article:

What goals could we set that aren’t laughably limited by our meagre hardware and software?

The answer, I posit, is:

The goal of creating something that vastly surpasses all the limitations of our hardware and software. Something that could set aims as far beyond our imagination as the greatest human goals are beyond the imagination of earthworms. Something whose understanding is as far beyond our own as the net total of all human knowledge is beyond the comprehension of seagulls.

Nibble on the fries, comrade, but keep thy eyes on the horizon. There are things beyond the pier, beyond fries, beyond our ability to comprehend. Do you not burn to unlock them?

 

Header image credit: “Western gull” Wikipedia page