Taking “Human” Out of Consciousness

What if the brain were no longer the only container for rational, volitional consciousness?

I will not here explore whether or not this technological feat is achievable (though personally I believe it is likely to be just as achievable as mapping the human genome, putting a man on the moon, or cloning a sheep – given a long enough time horizon), but rather what it might be like – and what direction this consciousness might take.

In this brief post, my aim is to explore some points about what “consciousness beyond humanity” might imply.

Separating Consciousness from Humanity as Such

The term “anthropomorphism” refers to the attribution of human qualities to non-human beings or objects. Not only do we have a tendency to name our cars like people, to talk to our pets like children, and to have our stories of God involve us being made in his image, but we also tend to equate conscious experience as such with our own conscious experience (for what other perspective do we have?).

Functionally – for most of us – this tendency has no immediate detriment to our lives. In looking at possibilities for consciousness, however, it will serve us to leave these human constraints behind.

Because future intelligences will be created by humans, and probably modeled in some way on our hardware (brains), anthropomorphism will – to some extent – be inescapable. We can only stand on the shoulders of what has been, and amazing creative inventions – from chess to jazz to rocket ships – were built with the insight of a prior set of ideas.

However, the very point – some would argue – of creating intelligences more capable than ourselves is to construct them in ways that go beyond our capacities – just as we have done with all other tools. Just as the rake is not limited to the size of the human hand, and the computer is not limited to the memory capacity of a human, so consciousness will likely leak beyond the boundaries that we can presently experience – and indeed even imagine.

Here I will list some of the elements of human consciousness that we are likely to associate with “consciousness” itself – though this association is not a necessary one.

a) Beyond the Five Senses

An “intelligent” conscious machine would most obviously seem to be one that can speak to us, see us, listen to us, and maybe even taste, smell, and feel just as we do. However, what are some other senses that such an intelligence might have? Our five senses developed to suit our survival needs, and we might suspect that a superintelligent AI wouldn’t be limited by the biological lineage that we humans share.

In a world where this degree of complexity can easily be programmed – the answers are limitless. Obvious first thoughts are augmentations or enhancements to human senses (again, our natural tendency is usually to relate to our own experience – for what else do we now have?). Such examples might include infrared vision, heat vision, or the ability to detect chemicals like a dog.

Further extensions relate to the animal world, such as an ability to sense light on our skin, not just our eyes (like some octopus species), or to be able to sense electricity in the water (like a hammerhead shark), or detect fire with a unique chemical sense (like some species of beetles), or to have an internal GPS system like some birds and butterflies seem to have.

However, it could get a lot more “far out” – to extents that we don’t quite understand right now (as we become aware of new phenomena and possibilities, many more potential “senses” could arise). Think – for example – of the ability to detect densities of stored information in the computers around you (like a dog sniffing for a bone), or the ability to sense the psychological wellbeing of the animal life around you, or any of a myriad other senses that might be created by picking up on signals from the world around us.

Nick Bostrom’s 2015 TED talk covers this idea of exploring the total space of conscious experience (here’s a link to that video, starting at 15 minutes in).

b) Beyond Current Emotions, Innate Drives, and Desires

When we think of intelligent machines, we often imagine them to have drives similar to those we see in human beings. For example, a striving for power, or a need to be loved and related to, or the pleasure of art, or a need to learn new skills and grow.

Emotions might be thought of in the same way. Would we expect an artificial intelligence to “feel” jealousy, enthusiasm, anticipation, grief, and so on?

Again, because machines will initially be modeled after humans, initial intelligences will likely be programmed and constructed to model some rudimentary forms of human “feeling” or even goal-orientation. So the first artificial consciousnesses might in fact enjoy being related to, and might in fact feel “anticipation,” “excitement,” and whatever other emotional experiences we initially determine.

However, there is no reason to believe that such intelligence will stop there, and even less reason to believe that the feelings or drives of future intelligences will relate at all to what we understand from our own experience. Rather than necessarily being devoid of emotion, or necessarily feeling human emotions, adaptive intelligences of the future will likely be able to access levels of “emotional” or “sensory” richness and complexity well beyond our imaginations – having “feelings” and “drives” that we would have no way to articulate from within human experience.

c) Beyond a Physical Presence

We are conscious, and we have bodies. However, it has long been supposed that consciousness may not imply any particular kind of embodiment. The French philosopher Maurice Merleau-Ponty would argue that thinking cannot happen without physical embodiment and expression – but other scientists and thinkers argue that thought may not require a body at all. Just as there may be persons completely paralyzed yet still awake and aware, there may be intelligent, sentient artificial intelligences that have no robot “body” – and if they did have one, it would not necessarily be humanoid.

Indeed, there may not even need to be a central “computer” (think: desktop) at all in order to house or bring about a sentience. With enough connected circuits and different databases, a conscious system may become “self-aware” very much on its own (indeed, popular sci-fi books have been written about the internet “waking up” in such a way).

Expansive Sentient Potential

For me to call the possibilities of consciousness “scary,” “amazing,” or “interesting” would be to impose my own subjective, human lens on the matter. The possibilities are not necessarily any of these things (though all three seem like good descriptors to me).

The more important matter is a consideration of what unfettered consciousness might truly imply. What seems certain to me is that as soon as computer-to-human interaction becomes comprehensible at a “human” level, it will quickly move beyond anything we could possibly imagine or relate to.

I do not believe it is likely that there will be a vast number of “enhanced” humans roaming the earth, living otherwise “normal” lives (washing their cars, driving their kids to school, gardening on the weekends).

Once we are capable of enhancing ourselves, the limitations of human experience, human capacities, and human life will likely fall by the wayside. We might choose to transport our consciousness into a blissful virtual reality that in no way resembles human life, human forms, or anything “human” at all. We may choose to not rely on others for happiness, or to set our default emotions to “joyful” at all times, or to lose the necessity of sleep, or to add a third arm coming out from our chests, or any other kind of expansion beyond “human” that might be interesting, or suited to our needs.

I believe that just as abortion is now generally accepted, prosthetic limbs are now generally accepted, and brain implants are more-or-less generally accepted, these “playing god” technologies will similarly be “no big deal” at a certain point.

At this point, the notion of “human” may lose the comforting and wholesome form that it has now, and may come to be seen as another fetter on our capacities – like using paper instead of a computer – or another fetter on our thinking – like nationalism in politics, or the notion of the “role” of women.

I am not by any means cataloguing what’s wrong with the human condition as it is – but merely suggesting that, by consensus, most people may come to believe it should be overcome. I devoted my 2015 TEDx talk entirely to this topic. My stomach still turns in knots every time I imagine the possibilities of humans transcending their present embodiment or capacities altogether. It’s not a comfortable feeling – and I have a sense that it will not be too long thereafter that humanity, and all the comforts that I know now, will be wiped out. The thought that all my cherished relationships, reading by the fire, or walking in the woods would become meaningless and antiquated (as primitive and lowly as ants following a pheromone trail, or hermit crabs finding a shell) is not an easy one to sit with.

Partially, this is a rational fear of a power that may very well be the greatest threat we’ve ever faced. Partially, it is an instinctive “pulling away” from the unknown, and a fear of losing what is now. In one sense, it is as logical as the fear of nuclear energy was in its time; in another, it is as illogical as the writer who dreaded the transition away from their beloved, old-fashioned typewriter.

Emerson has a wonderful phrase in his essay Self-Reliance:

The man must be so much, that he must make all circumstances indifferent. Every true man is a cause, a country, and an age; requires infinite spaces and numbers and time fully to accomplish his design…

It is possible that the man is right – that our consistent efforts to extend ourselves (with tools, with language, with computers, with machines) don’t ever stop – and that life itself has a kind of yearning for its next phase that we aren’t able to hold back.

There are a number of possible end-game scenarios of what this cognitively enhanced condition might be like – and I’ve outlined the cognitive enhancement scenario I consider to be most likely in a previous essay.

Where We Go From Here

There is no succinct conclusion to this work, but only the provocation of thought. I believe that we are on the brink of the most potentially amazing and the most potentially disastrous time in human history.

The prospect of tinkering with and altering consciousness itself has moral implications far beyond nuclear technology, internet technology, or anything else that has ever been developed.

Sentient life is the rational experiencer of the universe. Without the minds and senses we have, we might imagine (though we could be very wrong) that the universe would have little or no way of knowing itself – other than through the limited and relatively simple senses of other animals.

There seem to be no grander moral implications than those of the enhancement, magnification, expansion, or very creation of sentience itself – and I believe (as I’ve mentioned many times in this blog) that the conversations that deserve to take place on these topics deserve collaboration from all fronts of science, art, and politics – to ensure as best we can that this transition beyond humanity does not happen by accident, and that we calibrate it to its best and most beneficial use.

That ideal is not known, and cannot be known outside of vigilant and collaborative discernment and careful movement forward.