AI, Neurotech, and Human Irrelevance – A Three-Part Thought Experiment

I believe that it’s important to question the human-centric view of the future.

In many parts of the world, humans are increasingly aware of their impact on future generations, on other animal species, and on the natural ecosystem that supports all life. Still, when we gaze into the future, our preeminent goal is the happiness and wellbeing of hominids like ourselves.

The idea that Homo sapiens, as we are today, are the height of intelligence, sentience, or moral value is absurd. It is as absurd as imagining the Tyrannosaurus considering itself to be the most intelligent, sentient, and morally worthy creature imaginable. We – like the Tyrannosaurus – are just one temporary (and admittedly arbitrary) form, floating through time.

We forget that we are part of Lucretius’ “storm-whipped surge of life”, that not too long ago we didn’t even walk upright.

A square assessment of a transhuman and AGI (post-human intelligence) future is a square assessment of how (and if, and when) we want to hand off the baton as the dominant species in the known universe. The sufficiently far-off future is about the trajectory of intelligence itself.

Our responsibility seems to lie in how we want to influence that trajectory, how we want to hand off the baton of determining the future – and in deciding if, when, and how we cope with this hand-off.

Human Irrelevance – Three-Part Thought Experiment

I’ll walk through each step in the experiment with a handful of examples:

Step 1: Determine Criteria of Moral Worth

Create a list of the abilities or qualities of humanity that make humans morally valuable. Common examples might include:

  • Quality of being self-aware (this is self-evident)
  • Quality of living a relatively long life, enough to create, build, love
  • Ability to love other humans and other life
  • Ability to know ourselves, our memories, our identities, a “self”
  • Ability to relate and empathize with other humans or creatures
  • Ability to create art, to express our own interpretation of meaning
  • etc…*

Your list needn’t be limited to the examples above – it could include any of an essentially infinite number of qualities or abilities that hypothetically make humans worthwhile. This is not a list of qualities provided by someone else; this is whatever you personally consider to be morally worthy about humans – so select your own list.

The cheat in this exercise in human irrelevance is to say that humans are morally worthy simply because they are humans. This presumes that un-augmented, un-altered Homo sapiens are the highest conceivable moral value simply because they are un-altered Homo sapiens.

I consider this to be a feeble response, and a thoughtless and speciesist form of circular reasoning. A kind of magical thinking. I understand that in certain religious traditions, it might be impossible to break this view – and I can respect that view – but I certainly don’t agree with it.

If we are made in the image of the Gods, it would seem to be that of the Greek ones, with all their mixes of virtue, vice, and violence. While I don’t purport to know how to perfect the species, while I don’t know if such augmentation is theoretically possible in my lifetime, and while I don’t hope to see any post-humans created in the next few years, it is certainly possible to imagine more morally valuable entities than ourselves. How obscene to think otherwise.

Step 2: Imagine Vast Enhancement of Moral Worth Criteria

Imagine a future where human minds are augment-able (via brain-machine interface, via nanotechnology, via genomics, whatever), or where sentient intelligences are build-able (artificial intelligence).

Now, take your list of “morally worthy qualities and abilities.”

Imagine taking a cognitively enhanced human being, or a self-aware artificial intelligence, and imbuing said post-human intelligence with 100 times more of your morally worthy qualities and abilities than any single human could ever have.

Let’s say that your top three “morally worthy qualities and abilities about humanity” were:

  • Ability to understand the world and communicate that understanding
  • Ability to love and cooperate with one another
  • Ability to act with one’s own volition, and make an individual contribution within one’s lifetime

We could then imagine a machine or cyborg-human which:

  • Has 1,000 perceptive senses (as opposed to the 5-6 of human beings), has the ability to pull data from the entire web instantly, and has near-unlimited memory and near-unlimited compute power. Such a machine might be able to not only communicate the depths of nature’s truths to human beings, but could communicate in infinitely richer, more powerful, and more descriptive terms. Imagine the amount of insight communicated through the English language and hominid brains, and compare that with the limited amount of insight that is communicate-able when crickets chirp by rubbing their wings together. Now, imagine the same step-function upwards, vastly, vastly beyond the level of depth that humans can experience.
  • Has an unlimited ability to love any and all living things – not only through an empathetic “pulse” on vastly more creatures – but by genuinely being able to help those beings. Such a super-entity might serve as advisor and shoulder-to-cry-on for a woman going through a divorce, and at the same moment be delivering food to a family in financial hardship, and helping build nests for endangered owls. 100x post-human “Love” would probably imply modes of “Love” expression beyond human imagination.
  • Has not only 100x more volitional choice than human beings, but has the ability to change its own cognitive structures, testing out simultaneous “versions” of itself, calibrating, evolving, and building an ever-growing super-intelligence “personality” of wisdom, insight, abilities, etc – based on its own experimentation, and its own unique creative conscious impulses.

It would be very challenging to argue that such an entity would be anything other than vastly more important than a human being, pound-for-pound. The first four minutes of my 2014 TEDx talk attempt to summarize this relative moral valuation.

Step 3: Imagine Post-Human Moral Understanding

Imagine we have a policy issue to ponder. For example:

We work within a government ministry, we have only $1,000,000 in funding, and we must use those resources on supplies, staff training, and improvements – spread across 12 schools. How will we spend the funds to do the most good for the children and teachers?
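As an aside, notice that even posing this first scenario requires distinctly human abstractions – budgets, schools, marginal returns. Below is a minimal Python sketch of one naive way to formalize it; every figure and weight is invented purely for illustration, not taken from the essay:

```python
# A toy sketch (not from the original essay): the school-funding scenario,
# framed as a constrained allocation problem. All utility numbers are
# invented for illustration -- a real ministry would estimate them from data.

BUDGET = 1_000_000
SCHOOLS = 12
CATEGORIES = ["supplies", "staff_training", "improvements"]

# Hypothetical marginal utility per dollar for each category (made up).
MARGINAL_UTILITY = {"supplies": 1.2, "staff_training": 1.5, "improvements": 0.9}

def allocate(budget: int) -> dict:
    """Greedy toy allocation: split the budget evenly across schools,
    then weight each spending category by its assumed marginal utility."""
    per_school = budget / SCHOOLS
    total_weight = sum(MARGINAL_UTILITY.values())
    plan = {}
    for school in range(1, SCHOOLS + 1):
        plan[f"school_{school}"] = {
            cat: round(per_school * MARGINAL_UTILITY[cat] / total_weight, 2)
            for cat in CATEGORIES
        }
    return plan

if __name__ == "__main__":
    plan = allocate(BUDGET)
    print(plan["school_1"])  # e.g. {'supplies': 27777.78, ...}
```

Crude as this toy formalization is, it leans on concepts that – as the next step shows – are entirely out of reach for a chimpanzee.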

Imagine we have a simple ethics experiment. For example:

We have one perfectly healthy person, and we have five patients who need an organ transplant to survive (one heart, one lung, one kidney, etc). Should we kill the one person in order to save the other five?

Now, suppose that we pose these questions to the highest entity in the animal kingdom under Homo sapiens, a chimpanzee.

How effective do you think the moral solutions of the chimpanzee will be?

Of course, the moral instincts of a chimpanzee are positively useless in these cases. The concepts of an “organ”, of “money”, of “skills training”, and of “school” are near impossible for a chimpanzee to grasp, never mind projecting into the future and calibrating the good.

We humans have something like a 3-5% genetic difference from chimpanzees. What a difference that 3-5% makes. Ask a human the ethical questions above, and you’ll be able to have a dialogue, share ideas and concepts, and come up with potential answers.

Now, imagine we take that same 3-5% DNA change (and the subsequent improvement in brainpower), and we build upward from present-day humans. So, we have a post-human intelligence that is as much smarter than we are as we are smarter than chimpanzees.
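To make the shape of that extrapolation explicit: the move is multiplicative, not additive – the chimp-to-human step applied a second time. A toy sketch, with capability scores that are arbitrary placeholders rather than measurements:

```python
# Toy illustration of the "same step upward" analogy.
# The capability scores are arbitrary placeholders, not measurements;
# the point is only that the extrapolation is multiplicative, not additive.
chimp_score = 1.0
human_score = 50.0                      # hypothetical chimp-to-human gap
step = human_score / chimp_score        # the size of one evolutionary "step"
post_human_score = human_score * step   # same step applied again -> 2500.0
print(post_human_score)
```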

What would this kind of entity have to say about our ethical questions?

In order to imagine what this could be like, let’s imagine walking in on an ethical decision of a chimpanzee. Maybe she is deciding whether or not to pick fleas from another chimp’s head, or maybe she is deciding whom to share her bananas with. Now imagine if we humans came in to share our advice, and to suggest the next best action for the chimpanzee. The animal would be confused, and our words would be wholly lost on her.

That’s what it would be like for us to hear the response from this post-human. It would be wholly foreign to us – because it would be vastly beyond our own comprehension.

Whatever we think of as “morally relevant qualities or abilities” is comparatively chimp-level. A post-human intelligence as far above us as we are above chimpanzees would have a much richer and fuller perspective on morally relevant traits, and would almost certainly identify more morally relevant traits than our little minds are possibly capable of conceiving. I’ve written about this topic in great depth in an essay called AGI – Finding the Good.

Value of the Insignificance Exercise

The value of this exercise doesn’t depend on post-human intelligence being possible now – it’s certainly not. While I suspect there is a 50-60% chance I will see post-human intelligence within my lifetime, it is possible that such a thing won’t arrive for another thousand years or more.

The question that this thought experiment poses is still relevant. Namely:

What are we (as a species) aiming for?

As a species – what are the North Star objectives we are shooting for? What are the “constellations” of future scenarios that we consider to be preferable, and what are the futures we don’t want? If we did a good job with the next fifty or one hundred (or one thousand) years of human “progress” – where would we be? What would be the case? Why and how would things be better?

Would our grand end goal be a slightly happier version of anxious little Homo sapiens?

Or just more anxious little Homo sapiens, building societies on Mars and the moon? Is twenty billion mortal, flawed hominids the height of our aspiration – the height of our aspired “good”?

Or are we shooting for a transition to something vastly more morally worthy, and vastly more survive-able in the open reaches of the universe?

In my TEDx talk at Cal Poly, I explain from 2:05-4:00 a basic hypothesis of “red and blue orbs” (symbolizing sentient experience), and from 14:15-15:45 I lay out the difference between hominid-only moral aims and post-human intelligence and bliss aims.

I posit that almost all far-future “better” scenarios involve massive increases in both intelligence and positive qualia (positive conscious experience). Whether through enhancing existing minds, or creating minds from scratch, I foresee essentially zero “better” future scenarios without a good deal of both.

More questions then arise:

To what post-human intelligence should we hand the baton of species dominance?

When and how should we create such an entity that would have vastly greater moral worth and insight than humans?

How can we – as a species – get on the same page about which constellations of post-human futures we want to move closer to – and which we don’t?

The thought experiment has nothing to do with insulting humanity as a species. Rather, it is intended as a clarion call to foster conversation about what we’re ultimately after.

What to Do About Our Precarious Moral Standing

Things either die or turn into other things.

I could quote Lucretius lavishly, but I’ve done that enough on this blog, and it does nothing more than clothe human transience in beautiful words – which makes it slightly less bothersome to consider.

We could pessimistically frame our condition by saying that all progress leads to our demise. More accurate (and more neutral) would be to say that all progress leads to whatever is after us, beyond us.

It’s ridiculous not to consider this when we consider the far future. Hominids-as-they-are are as arbitrary a species as any other; the grand trajectory of intelligence is what matters.

“A subtle chain of countless rings
The next unto the farthest brings;
The eye reads omens where it goes,
And speaks all languages the rose;
And, striving to be man, the worm
Mounts through all the spires of form.”
– Emerson, Nature

Clinging to visions of hominids a billion years from now is absurd. If we want to map a future for technology, for society, we will eventually have to determine what values we want to shoot for – and we will likely arrive at entities that embody vastly more of those values than we humans ever could (read: Human Ideals Will Tear Us From Humanity).


NOTE: I am obviously not suggesting that brainstorming on North Star goals should trump discussions about near-term issues (threats to democracy, climate change, nuclear non-proliferation, etc). I am, however, suggesting that a serious thread of discourse is warranted in order to discern the North Star goals of humanity.

* Gunkel points out that Coeckelbergh refers to this as a “properties approach” to moral value. There are many other potential approaches, but this seems to be a viable and pragmatic one that best suits this thought experiment. I happen to agree with Sparrow (also cited in Gunkel’s paper) that sentience (the ability to experience qualia) is a required precursor to an entity’s moral value in and of itself, but I’ve explored this topic elsewhere. Sentience, however, could just be seen as a “property.” “Ethical behaviorism” (which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status) could also be deemed a “property” – and I recommend reading Danaher’s work on the subject.

Image credit: vasyapoup.deviantart