Reflection on John Harris’s “Enhancements Are a Moral Obligation”

I’ve gotten my hands on a new copy of Human Enhancement, edited by Julian Savulescu and Nick Bostrom, and the first article I chose to delve into was “Enhancements Are a Moral Obligation,” by John Harris.

“Fixing” and Upgrading, a Debate

My purpose here is to further outline an important aspect of the disagreement between Harris and Norman Daniels with regard to the moral obligations and dangers of enhancement (and indeed the definition of enhancement). I would have liked to read a reply by Daniels, and even more to have been privy to a conversation between the two men – which I imagine would be lively – but my reflection today concerns the particular view Harris disagrees with: that fixing human maladies is acceptable, but enhancing “regular” human qualities is too laden with risk.

From Harris’ article we gather that Daniels argues for the “okay-ing” of interventions designed to get a human being back to regular human capacities, but not for enhancing those “regular” capacities any further. To Daniels (as presented by Harris), the moral concern seems to be with preserving the norm, and giving others equal opportunity to experience and make the most of this normal level of capacity / ability / functioning.

Harris sees a very blurred line between “enhancement of function” and “amelioration of dysfunction,” and views the desired end of technology or intervention to be some kind of human good, a tangible benefit. I enjoyed this passage where he summarizes a good deal of his argument and his “side” of this moral issue:

The problem seems to be an unjustified assumption that normal traits are acceptable by reason of their normality and that the risks of new “treatments” are justifiable only when the alternative is an inevitable catastrophic disease. Weaken or qualify the argument so that the inevitability applies to a population but not on an individual basis, and so that normal traits are only acceptable when they are desired or beneficial and the argument revives, but only at the cost of abandoning the therapy/enhancement distinction altogether, and only if the disease is defined relative, not to normal species functioning or species-typical functioning, but relative to possible functioning.

In this way, Harris works towards his argument of enhancement as a moral imperative.

I’m not firmly on either side of the fence, though I tend to lean more toward Harris’ argument than toward the arguments I’ve gathered from Norman Daniels (through Harris’ work).

The Creation of “Freaks”

One of the interesting points Daniels makes is the idea that enhancing a single individual is not a process of changing “human nature” (for the rest of the population is not genetically or otherwise altered by altering one individual), but of “creating freaks.”

From my perspective, the term “freaks” is loaded with oodles of connotations that may or may not be necessary in describing a genetically enhanced individual – but if the point of using the term was to make it stand out – to make it visceral – I think it serves the purpose.

This brings up what to me seems to be a primary concern of the transition beyond our current genetic condition, and further – beyond biological intelligence. Namely: Who will be transformed first, and how will this process of “species re-vamping” be undertaken?

If some kind of gene treatment that could prevent AIDS, prolong healthy vision, prevent most cancers, etc. were created, it might not be too much of a stretch to see these treatments (maybe in the form of an injection, or series of injections) “rolled out” the way medicine is distributed today.

Rolling Out a Better Brain

On the other hand, if there were a way to alter the brain to not only give us tremendously greater subjective well-being, but also enhance our mental capacities to remember and connect information with nearly twice our current effectiveness, it would be difficult to imagine a traditional medical “roll-out.”

The signs would read: “$110,000.00 for an AS334 Fulfillment Enhancement, and $220,000.00 for an accompanying cognition capacity enhancement!”

Unlikely.

Would some very wealthy or “connected” people have access to this kind of “enhancement treatment” before others? If this were the case, would these people instantly be 100 times more suited for important roles and positions given their increased capacity? (Who wants a “regular” human to run the country when we could have a super-insightful super-human, working harder, smarter, more continuously, and more intelligently?)

Would there be a certain percentage of the human population who roam about without being “super-happy-and-smart”? We can imagine there would be tremendous scorn, envy, and resentment toward the “enhanced.”

These concerns don’t even take into account the possibility of such enhancements creating severe and unexpected side-effects in the enhanced individuals – nor what kinds of evils might be done with such a technology. Creating a pre-programmed army of super-intelligent soldiers? …Creating a pre-programmed army of super-intelligent engineers to work on ways to make even more powerful super-intelligent soldiers?

This assimilation and “who goes first” problem only gets more and more interesting as this “enhancement” moves farther and farther from normal human functioning, and as the possibilities of such a capacity become less and less predictable.

Merger of Man and Machine (“Okay… who goes first?!”)

Take things to the level of merger with non-biological intelligence – for example, some kind of “exporting” of our consciousness and personality into an android… and the “who goes first” question becomes all the more complex.

If this “merged” being (potentially no longer referred to as “human”) were millions of times more intelligent and hundreds of times more physically capable than any human being, it would seem as though this first being might – whether we like it or not – determine the future of humanity and sentient potential in general.

You can’t just have a few people “opt” to become super-hyper-mega-androids and then return to their day jobs as dentists or salesmen. This kind of enhancement would almost undeniably signal a shift beyond biological intelligence, or at least a shift toward a world run and furthered primarily by non-biological intelligence.

So… who’s the first “freak” (to use Daniels’ term)? What is the role of this first super-human? An experiment? A new political leader programmed to look out for the good of humanity?

It is possible that some of Daniels’ resistance to enhancement lies in all of these questions, which will undeniably (in some way, shape or form) be brought to the forefront as “treatments” and “enhancements” become more and more developed and farther-reaching.

Food for thought, indeed… More work certainly remains to be done on this “who goes first” question – it’s one of the key areas of my own interest, as well as a key, guiding question in the evolution of these technologies and the ethical considerations that go along with them.

I have postulated that when enhancement becomes possible, it will not be viable or safe to have new super-intelligent, super-capable humans functioning in the real world. Conflict seems inevitably to ensue from competing super-intelligences. We can’t seem to agree as a species when we share essentially the same DNA, so imagine trying to forge a peaceful world when there are varied versions of “upgraded” humans, each with a different permutation of super-intelligence, and each 100 times more intelligent than an un-enhanced human.

In my essay “The Epitome of Freedom” I suggest that when grand enhancement becomes possible, a single intelligence will likely rule the physical world (avoiding conflict), and individual super-intelligent, upgraded human minds will be best off existing in simulations, where no actual conflict between individuals could occur.

What do you think?