A Partial Inquiry on Fulfillment Beyond Humanity
I believe that it’s important to question the human-centric view of the future.
In many parts of the world, humans are increasingly aware of their impact on future generations, on other animal species, and on the natural ecosystem that supports all life. Still, when we gaze into the future, our preeminent goal is the happiness and wellbeing of hominids like ourselves.
The idea that homo sapiens, as we are today, are the height of intelligence, sentience, or moral value is absurd. It is as absurd as imagining the Tyrannosaurus considering itself to be the most intelligent, sentient, and morally worthy creature imaginable. We – like the Tyrannosaurus – are just one temporary (and admittedly arbitrary) form, floating through time.
We forget that we are part of Lucretius’ “storm-whipped surge of life”, that not too long ago we didn’t even walk upright.
A square assessment of a transhuman and AGI (post-human intelligence) future is a square assessment of how (and if, and when) we want to hand off the baton as the dominant species in the known universe. The sufficiently far-off future is about the trajectory of intelligence itself.
Our responsibility seems to lie in how we want to influence that trajectory – how we want to hand off the baton of determining the future – and in deciding if, when, and how we cope with this hand-off.
I’ll walk through each step of the thought experiment with a handful of examples:
Create a list of the abilities or qualities that make humans morally valuable. Common examples might include:
Your list needn’t be limited to the examples above – it could include any of an essentially infinite number of qualities or abilities that hypothetically make humans worthwhile. This is not a list of qualities provided by someone else; it is whatever you personally consider to be morally worthy about humans – so select your own list.
Monday thought experiment:
– List the properties that make humans more morally relevant than other animals
– Imagine an AI entity with 100x those same properties
– Try to form an argument as to why said entity isn’t more valuable than a human
Full essay: https://t.co/5ej6TNPgpw pic.twitter.com/8cvEWZoUY2
— Daniel Faggella (@danfaggella) July 1, 2019
The cheat to this exercise in human irrelevance is to say that humans are morally worthy simply because they are humans. This presumes that un-augmented, un-altered homo sapiens are the highest conceivable moral value simply because they are un-altered homo sapiens.
I consider this to be a feeble response, and a thoughtless and speciesist form of circular reasoning. A kind of magical thinking. I understand that in certain religious traditions, it might be impossible to break this view – and I can respect that view – but I certainly don’t agree with it.
If we are made in the image of the Gods, it would seem to be that of the Greek ones, with all their mixes of virtue, vice, and violence. I don’t purport to know how to perfect the species, I don’t know whether such augmentation will even be possible in my lifetime, and I don’t hope to see any post-humans created in the next few years – but it is certainly possible to imagine entities more morally valuable than we are. How obscene to think otherwise.
Imagine a future in which human minds are augment-able (via brain-machine interface, nanotechnology, genomics, whatever), or in which sentient intelligences are build-able (artificial intelligence).
Now, take your list of “morally worthy qualities and abilities.”
Imagine taking a cognitively enhanced human being, or a self-aware artificial intelligence, and imbuing said post-human intelligence with 100 times more of your morally worthy qualities and abilities than any single human could ever have.
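For readers who like their thought experiments concrete, here is a minimal sketch of the exercise in Python – assuming, purely for illustration, that morally worthy qualities could be scored numerically at all. The qualities, the scores, and the `MoralEntity` class are all hypothetical placeholders, not a real metric:

```python
from dataclasses import dataclass

@dataclass
class MoralEntity:
    name: str
    # Scores for whatever qualities YOU listed in step 1, normalized so
    # that a typical human scores 1.0 on each. Entirely hypothetical.
    qualities: dict[str, float]

    def total(self) -> float:
        # Summing the scores is itself a contestable moral assumption -
        # which is part of the point of the exercise.
        return sum(self.qualities.values())

human = MoralEntity("human", {"sentience": 1.0, "creativity": 1.0, "compassion": 1.0})

# Step 2: the same qualities, at 100x any human level.
post_human = MoralEntity("post-human", {k: v * 100 for k, v in human.qualities.items()})

# Step 3: try to argue that this comparison is morally irrelevant.
print(human.total(), post_human.total())  # 3.0 vs. 300.0
```

The sketch proves nothing by itself – it only makes the structure of the argument explicit: if moral worth tracks the qualities on your list, and those qualities can be possessed in vastly greater degree, the conclusion follows mechanically.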
Let’s say that your top three “morally worthy qualities and abilities about humanity” were:
We could then imagine a machine or cyborg-human which:
It would be very challenging to argue that such an entity would be anything other than vastly more important than a human being, pound-for-pound. The first 4 minutes of my 2014 TEDx attempt to summarize this relative moral valuation:
Imagine we have a policy issue to ponder. For example:
We work within a government ministry, and we have only $1,000,000 in funding, and we must use those resources on supplies, staff training, and improvements – spread across 12 schools. How will we spend the funds to do the most good for the children and teachers?
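Even this “simple” budget question is a value-laden optimization problem. The toy sketch below makes that visible: a naive greedy allocation that only works because every genuinely moral judgment has been baked into a made-up `marginal_good` function. All of the numbers, and the function itself, are assumptions for illustration:

```python
import math

BUDGET = 1_000_000
N_SCHOOLS = 12
STEP = 10_000  # allocate in $10k increments

def marginal_good(school: int, already_spent: int) -> float:
    # Hypothetical: schools differ in "need", and returns diminish with
    # spending. In reality, nobody hands the ministry this function.
    need = 1.0 + (school % 4)  # fake need levels, 1.0 through 4.0
    return need / math.sqrt(1 + already_spent)

spent = [0] * N_SCHOOLS
for _ in range(BUDGET // STEP):
    # Greedy: give the next $10k wherever it currently does the most "good".
    best = max(range(N_SCHOOLS), key=lambda s: marginal_good(s, spent[s]))
    spent[best] += STEP

print(spent)  # one of infinitely many defensible allocations
```

The hard part – deciding what counts as “good” and how need should be weighed – happens before the first line of code runs, and it is exactly that kind of judgment the rest of this section puts to the test.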
Imagine we have a simple ethics experiment. For example:
We have one perfectly healthy person, and we have five patients who need an organ transplant to survive (one heart, one lung, one kidney, etc). Should we kill the one person in order to save the other five?
Now, suppose that we pose these questions to the most cognitively capable animal below homo sapiens: a chimpanzee.
How effective do you think the moral solutions of the chimpanzee will be?
Of course, the moral instincts of a chimpanzee are positively useless in this case. The concepts of an “organ”, of “money”, of “skills training”, and of “school” are nearly impossible for a chimpanzee to grasp – never mind projections into the future, or calibrating the good.
We humans have something like 3-5% genetic difference from chimpanzees. What a difference that 3-5% makes. Ask a human the ethical questions above, and you’ll be able to have a dialogue, share ideas and concepts, and come up with potential answers.
Now, imagine we take that same 3-5% DNA change (and subsequent improvement in brainpower), and we build upward from present-day humans. So, we have a post-human intelligence that is as much smarter than we are, as we are smarter than chimpanzees.
What would this kind of entity have to say about our ethical questions?
In order to imagine what this could be like, let’s imagine walking into an ethical decision of a chimpanzee. Maybe she is deciding whether or not to pick fleas from another chimp’s head, or maybe she is deciding who to share her bananas with. Now imagine if we humans came in to share our advice, and to suggest the next best action for the chimpanzee. The animal would be confused, and our words would be wholly lost on her.
That’s what it would be like for us to hear the response from this post-human. It would be wholly foreign to us – because it would be vastly beyond our own comprehension.
Whatever we think of as “morally relevant qualities or abilities” is comparatively chimp-level. A post-human intelligence as far above us as we are above chimpanzees would have a far richer and fuller perspective on morally relevant traits, and would almost certainly identify more morally relevant traits than our little minds are possibly capable of conceiving. I’ve written about this topic in great depth in an essay called AGI – Finding the Good.
The value of this exercise doesn’t depend on post-human intelligence being possible now – it certainly isn’t. While I suspect there is a 50-60% chance I will see post-human intelligence within my lifetime, such a thing may not arrive for another thousand years or more.
The question that this thought experiment poses is still relevant. Namely:
What are we (as a species) aiming for?
As a species – what are the North Star objectives we are shooting for? What are the “constellations” of future scenarios that we consider to be preferable, and what are the futures we don’t want? If we did a good job with the next fifty or one hundred (or one thousand) years of human “progress” – where would we be? What would be the case? Why and how would things be better?
Would our grand end goal be a slightly happier version of anxious little homo sapiens?
Or just more anxious little homo sapiens, building societies on Mars and the moon? Is twenty billion mortal, flawed hominids the height of our aspiration – the height of our aspired “good”?
Or are we shooting for a transition to something vastly more morally worthy, and vastly more survive-able in the open reaches of the universe?
In my TEDx at Cal Poly, I explain a basic hypothesis of “red and blue orbs” (symbolizing sentient experience) from 2:05-4:00, and from 14:15-15:45 I lay out the difference between hominid-only moral aims and post-human intelligence and bliss aims:
I posit that almost all far-future “better” scenarios involve massive increases in both intelligence and positive qualia (positive conscious experience). Whether through enhancing existing minds, or creating minds from scratch, I foresee essentially zero “better” future scenarios without a good deal of both.
More questions then arise:
To what post-human intelligence should we hand the baton of species dominance?
When and how should we create such an entity that would have vastly greater moral worth and insight than humans?
How can we – as a species – get on the same page about which constellations of post-human futures we want to move closer to, and which we don’t?
The thought experiment has nothing to do with insulting humanity. Rather, it is intended as a clarion call – an attempt to foster conversation about what we’re ultimately after as a species.
Things either die or turn into other things.
I could quote Lucretius lavishly, but I’ve done that enough on this blog, and it does nothing more than clothe human transience in beautiful words – which makes it slightly less bothersome to consider.
We could pessimistically frame our condition by saying that all progress leads to our demise. More accurate (and more neutral) would be to say that all progress leads to whatever is after us, beyond us.
It’s ridiculous not to consider this when we consider the far future. Hominids-as-they-are are as arbitrary a species as any other; the grand trajectory of intelligence is what matters.
“A subtle chain of countless rings
The next unto the farthest brings;
The eye reads omens where it goes,
And speaks all languages the rose;
And, striving to be man, the worm
Mounts through all the spires of form.”
– Emerson, Nature
NOTE: I am obviously not suggesting that brainstorming on North Star goals should trump discussions about near-term issues (threats to democracy, climate change, nuclear non-proliferation, etc.). I am, however, suggesting that a thread of serious discourse should be devoted to discerning the North Star goals of humanity.
* Gunkel points out that Coeckelbergh refers to this as a “properties approach” to moral value. There are many other potential approaches, but this seems to be a viable and pragmatic one that best suits this thought experiment. I happen to agree with Sparrow (also cited in Gunkel’s paper) that sentience (the ability to experience qualia) is a required precursor to an entity’s moral value in-and-of itself, but I’ve explored this topic elsewhere. Sentience, however, could just be seen as a “property.” “Ethical behaviorism” (which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status) could also be deemed a “property” – and I recommend reading Danaher’s work on the subject.
Image credit: vasyapoup.deviantart