Threshold vs. Scalar Moral Status in a Post-Human World

I’ve been diving into Human Enhancement (Oxford) lately, and came across a topic that had been bristling in my own mind before I saw the name of the chapter. In the chapter “Prejudice and Moral Status of Enhanced Beings,” Julian Savulescu, Director of the Oxford Uehiro Centre for Practical Ethics, covers a moral conundrum I’ve mulled over many times, namely:

We don’t care much about flies, rats, or even cows, and so we use them almost exclusively for our own ends (in the case of cows and rats) or feel just fine killing them (in the case of flies). If a species from another planet, or an intelligence created on our own planet, were as superior to us in its computational, physical, and volitional capacities as we are to flies, would it not be granted the same privilege to treat us (humanity) much as we treat other creatures?

The Scalar Model

If we are granted moral status in a “scalar” fashion, because we have, say, 5% of the computational capacity of these new life forms, then whether we are treated like cattle depends on our traits and qualities relative to theirs. Maybe the aliens are much stronger, bigger, and faster, but not so much so that they see us as fruit-fly equivalents. Under these conditions, we might be spared.

For Savulescu, this scale is generally (at least for us as humans) based on a conception of what a “person” is (personism): rationality, compassion, a concept of self and identity, and so on.

The Threshold Model

In the threshold model, a line is drawn between creatures that must be left undisturbed and creatures that may be used however we see fit for other ends. In other words, above a certain level of self-awareness, empathy, rationality, and so on, a being is granted the moral status to be left more or less to its own devices, rather than eliminated or used as a resource by more capable beings.

For Savulescu, this model likely leaves us unmodified humans safest in the event of a post-human world, or the arrival of an intelligent alien species.
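To make the contrast concrete, here is a minimal, purely illustrative sketch of the two models as decision rules. This is my own toy framing, not Savulescu’s formalism, and the numbers are arbitrary: the scalar rule grades moral status continuously by a being’s “person-making” capacities relative to the judging species, while the threshold rule grants full status to anything above a fixed line.

```python
# Toy illustration of scalar vs. threshold moral status (not from the book).
# "capacity" stands in for whatever bundle of person-making traits
# (rationality, self-awareness, compassion) the judging species values.

def scalar_status(capacity: float, judge_capacity: float) -> float:
    """Moral status as a fraction of the judge's own capacity (0.0 to 1.0)."""
    return min(capacity / judge_capacity, 1.0)

def threshold_status(capacity: float, threshold: float) -> float:
    """Full moral status above a fixed line, none below it."""
    return 1.0 if capacity >= threshold else 0.0

human, alien, line = 1.0, 20.0, 0.5        # arbitrary illustrative numbers

print(scalar_status(human, alien))         # 0.05 -> we count for about 5% of the judge
print(threshold_status(human, line))       # 1.0  -> we clear the line and are left alone
```

The same human fares very differently under the two rules, which is exactly why the choice of model matters in the scenarios below.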

Potential Problems with Either Model

Scalar Improvement Imbalance:

One obvious problem with the scalar model is relative growth between ourselves and the species or being that stands above us. Let’s say we (generally) have 5% of the “valued” qualities of an alien species, yet those qualities more or less top out for us at adulthood. For an alien or super-human being, we might imagine that continued improvement would be much simpler and much faster.

In other words, if 5% of their intelligence, compassion, and so on is the relative share required for us not to be treated like slave animals, food, or experimental rats, then we are likely in “big trouble” from the widening disparity between us and the species or beings above us. As their intelligence grows and grows (whether we speak of human-created superintelligence or a species from another planet, we can probably expect their rate of self-improvement to be higher than our own), our relative share of the “valued” traits might sink to 3%, 1%, or 0.004%, at which point, assuming this species uses a scalar model, we’d be lunch, waste, or worse.
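A back-of-the-envelope sketch of that widening gap, under the purely hypothetical assumption that our capacities plateau while theirs compound by about 10% per period:

```python
# Hypothetical growth rates: if their capacities compound even slightly faster
# than ours, our relative share of the "valued" traits shrinks toward zero.

human, other = 1.0, 20.0                   # start: we have 5% of their capacity
human_growth, other_growth = 1.00, 1.10    # we plateau; they improve ~10% per period

for period in (10, 50, 100):
    ratio = (human * human_growth**period) / (other * other_growth**period)
    print(f"after {period} periods: {ratio:.4%}")
# roughly 1.9%, then 0.04%, then 0.0004% of their capacity
```

The exact figures are invented; the point is only that any persistent difference in growth rates drives the scalar ratio toward zero.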

Professor Hugo de Garis has referred to this as “the species dominance question,” and it’s one that I believe will dominate global politics in the next 15 to 30 years. On the TEDx stage I’ve grappled with “enhanced humans” being more valuable than “regular humans,” and with future super-sentient AI being more morally worthy than humans.

My instinct tells me that in time, no species will get a free ride just for being “special”, and that the process of creative destruction that brought about humanity will eventually consume humanity on its way to whatever is next, whatever is stronger, whatever is smarter.

Even with great efforts put toward forestalling this process, over a long enough time horizon I think the best hope for humanity is either to:

  1. Upload our minds into blissful simulations (a la Black Mirror’s San Junipero, but much more intense, expansive, and blissful)
  2. “Bow out” graciously, dying off of our own accord rather than being forcefully wiped out by machines

I’m not saying I hope these scenarios befall humanity, but I feel they may be the best that a feeble and weak species can hope for in the long term.

Morality / Ethics of Other-Worldly Being(s) as Questionable in Itself:

In my opinion, it is very bold indeed to suppose that any intelligence from another planet would have a system of ethics that values individual conscious entities the way we seem to hope it will. A species advanced enough to reach us, as Kurzweil poses in an argument that seems rather logical, is not likely to “land” in a spaceship (as in the 1996 film Independence Day); it will more likely be capable of near light-speed travel, and more inclined to send drones or explorer-bots than to come here and land itself.

It seems like tremendously wishful thinking that a vastly superior intelligence would have a philosophical conception of “ethics” that would “click” just right so that we little humans are safe and sound. I would argue that, especially if this intelligence were not built by man (we can imagine that an intelligence built by man would have some kind of safety measures built in to keep us safe from it, at least initially), our most likely fate is to be eliminated, treated like any other vastly less intelligent matter to be leveraged for the sake of the intelligence itself and its betterment. In a much more robust paper on this topic, Oxford philosopher Nick Bostrom concurs.

Shifting Conception of Ethics / Right Action:

If a super-intelligent species or entity does in fact come to engage with man in a friendly fashion (which seems most likely if we construct this intelligence ourselves), it seems blatantly false to suppose that this would be some kind of eternal, sheltered harmony.

A super-intelligence, first of all, would be capable of a much higher, deeper, and richer conception and theory of “right and wrong.” This, we might imagine, could end up being as simple as some particular end goal (e.g., expand exponentially), but more likely, given its tremendous intelligence, it will be guided by trillions more considerations than we as human beings are capable of understanding.

That being said, as the super-intelligence multiplies, assimilates more and more information, and continuously expands and improves itself, we might imagine its conception of “right and wrong” shifting as well (just as chimpanzee ethics and human ethics have varied over time).

Think of how our own opinions on ethics (and nearly all other matters) have shifted over the course of our lives, and then magnify that potentially trillions of times. This kind of learning seems to make “ethical shifts” at least as inevitable for a super-intelligence as they are for human beings. One day we may be deemed to have inherent value to the super-intelligence; the next day (or microsecond), it might be judged best for us to be eliminated. At that point we might be to this intelligence the equivalent of what fruit flies are to us. I have argued that a phronetic AI (an AI calibrated to discern moral good) may be a worthy first attempt at superintelligence.

Concluding Thoughts on Moral Status

I see immense value in the distinctions Savulescu draws between different moral perspectives on the value of life, as well as in the notion of “personism” (as distinct from “humanism”) which he lays out in this portion of Human Enhancement (2009).

The concept of scalar versus threshold moral status is the one I particularly aimed to explore here, and it’s a conversation that I believe is of great importance, especially as a factor in the human construction of superintelligences.

I pose, however (and of course, the conversation has infinitely more potential for development), that:

a) It seems ridiculously unlikely that any non-earthly intelligence would find us innately valuable and refrain from destroying us or using us for its own betterment.

b) Even a superintelligence of our own construction would hardly guarantee our safety, since its conception of “ethics” would be so different, potentially so much deeper, and probably even more prone to shift and adjust than our shaky human moral ideals.

In either case, I believe the idea of the arm of a little green man, or the robotic arm of a super-intelligent, omnipotent computer, coming down to sign some kind of moral agreement to keep humanity safe is extremely wishful thinking, and that in the long haul we will share the fate of 99.9% of all species ever to come into existence: evolving into something else, or becoming extinct.

The most important application of this idea, I pose, would be to take these points about moral status thoroughly into consideration when constructing a super-intelligence of our own.

The cause, then, would be to determine the trajectory of intelligence and sentience itself.


Header image credit: jurist.org