Does it Matter if Future AI is Conscious?

I spoke recently at a United Nations and INTERPOL event in Singapore, on the topic of artificial intelligence use-cases in law enforcement. One of the only other Americans at the event – and a fellow speaker – was Thomas Campbell. Thomas spent many years in high-level government tech roles, and from 2015 to 2017 he was the first National Intelligence Officer for Technology (NIO-TECH) with the National Intelligence Council (NIC). His INTERPOL presentation touched on a blog post that he wrote (linked below), in which he addressed the development of human-level artificial intelligence and superintelligent AI.

As much as I like current AI use-cases (indeed, covering near-term applications is what I do for a living at Emerj), readers who know me well know that my ultimate interest lies unabashedly in posthumanism, and in the ethical considerations of “what’s after people”.

My Qatar Airways flight to Boston is 12 hours, and I have two tabs up in my browser: Gmail, and Tom’s post. Without wifi, I got to both reading and thinking, and decided to reply to the portion of Tom’s article that addresses consciousness and artificial intelligence. Tom’s quotes and my reflections follow, and you can read the full post on Tom’s site here:

https://www.futuregrasp.com/kardashev-scale-analogy

All of the indented and bolded blocks in this article come directly from Tom’s article; here’s the first one:

General AI may be the most difficult Type to achieve, and also the most difficult to recognize. A fascinating development in AI research is the debate among ethics experts and philosophers on what characteristics would be needed to have an AI qualify as General AI. Some thinkers take the position that General AI would only be achieved if the AI becomes conscious or self-aware. [15] We take the tact here that General AI could be achieved with or without actual machine consciousness.

This seems to be a safe assessment. While I think it would be a screaming shame if humanity were unable to understand consciousness at a higher level than we’re able to today – it is certainly possible that AI could take off without much direct insight into consciousness. Machines at this “General AI” level would certainly still change life on earth drastically – whether they have a rich inner experience, or whether they are as aware as toaster ovens.

In “How to Create a Mind”, Kurzweil argues that we will treat AIs as if they are conscious as long as they seem to be conscious. This is probably the case, as Tom seems to argue as well.

As an analogy, one may debate whether animals are truly conscious. Do they recognize the passage of time beyond base instinctual reactions to weather and seasons? Do they recognize themselves as separate organisms with potential for independent actions? However one answers these questions—and the answers may be species-dependent—one would be hard-pressed to argue that animals are not adapted to their particular environment and their survival. Thus, they are core actors within their respective ecologies, whether or not they are self-aware.

There are laws against cruelty to animals – and there are no laws against cruelty to toaster ovens. It seems somewhat obvious that – as far as we can pragmatically tell – consciousness requires a certain degree of neural activity and some semblance of a brain. While it is possible that rocks and trees are conscious, we don’t seem to have any robust evidence that they experience wellbeing or suffering in the same way as, say, a pig or a parrot.

While our understanding of consciousness might develop further, it’s ridiculous to suspect that animals don’t have some degree of sentient experience – particularly fellow mammals. Consciousness isn’t the ability to set out on volitional plans – it’s simply the ability to experience qualia, to experience some proxy of suffering or joy or sensory experience in an “internal movie” the way that we do – and that plants and rocks probably don’t. There’s ample reason to believe both that animals are conscious, and that their wellbeing holds moral weight.

Also – being “actors in their environments” is entirely irrelevant.

There is nothing relevant outside of consciousness. Environments are wholly irrelevant unless there are conscious agents to experience them.

The reason we care about how animals interact with an environment is in reference either to their own experience and wellbeing, or to their impact on other sentient entities in that environment.

We might concern ourselves with how the rabbit population in Australia interacts with Australian fig trees. We don’t really care about the unconscious fig trees themselves – rather, we care about how the ecological balance of trees and rabbits affects other living (conscious) creatures.

Human beings who express concern for “the environment” aren’t concerned with “saving the granite”, or “improving the living conditions for the molten magma under the earth’s crust”. The only reason that we care about “saving” or “improving” or “preserving” the environment is so that we (sentient beings) and other living things (sentient beings) might have a chance to live, and to hopefully have an improved quality of life.

Such may be the same with a General AI. Would a General AI with all the trappings of clusters of Narrow AIs, even without consciousness, be indistinguishable from a human? Would it not also dramatically influence our environment? A recent publication in Science explores this point:

“We contend that a machine endowed with C1 [i.e., global availability consciousness, in which there exists a relationship between a cognitive system and a specific object of thought – ‘Information that is conscious in this sense becomes globally available to the organism; for example, we can recall it, act upon it, and speak about it.’] and C2 [i.e., self-monitoring consciousness, ‘…a self-referential relationship in which the cognitive system is able to monitor its own processing and obtain information about itself.’] would behave as though it were conscious; for instance, it would know that it is seeing something; would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans.” [16]

Given the fact that sentience seems to be the bedrock of moral relevance (i.e. What else matters if there is nothing to determine the “mattering”? Would one prefer to live in a world of virtuous beings who are all in extreme conscious suffering?), it would be a terrible shame if the next 20-30 years brought no substantial improvement in our understanding of how consciousness arises and functions.

It is indeed plausible that human-level AI or even superintelligence might exist with no inner experience, no self-awareness whatsoever.

While it might be possible to say that it doesn’t matter whether General AI is conscious or not, in fact it does matter. As individuals, we are solipsistic – we only know our own experience and our own feelings and our own memories, and we might argue that other people don’t really matter to us at all. Of course for most of us, other people matter immensely. Not only do we care about the people close to us – but most of us have a concern for human beings around the world whom we’ve never met, and many of us even have concern for future generations of humans and animals who haven’t even been born yet.

Imagine that General AI proliferates, and that for every human on the planet there’s roughly one General AI – each of which, for the sake of simplicity in this thought experiment, is housed inside a physical robot. The scenario is grossly oversimplified, but let’s say that’s 8 billion General AI machines.

Now imagine three scenarios:

  1. All of the human-level AI are lifeless computers, with no actual inner conscious experience of their own.
  2. All of the human-level AI are not only conscious, but are in a general state of high and pleasant wellbeing, representing a consistent “gradient of bliss” at all times.
  3. All of the human-level AI are not only conscious, but are in a general state of excruciating suffering, representing a consistent “gradient of pain” at all times.

Only psychopaths would be excited by the prospect of scenario 3, and most people would probably agree that scenario 2 would be an aggregate utilitarian good compared to scenario 1.

Consciousness in human-level AI may not matter much for our own experience of superintelligent machines (i.e. all we know is that it makes us breakfast, or teaches us French, or drives us to extinction, whatever) – but consciousness indeed matters – and from a utilitarian perspective (the perspective that most people are coming from when they express concern for nature, or for starving children they’ve never met), nothing matters more.

For us as humans, I argue that guiding or directing the trajectory of post-human consciousness is astronomically more important than the details of how AI serves us in the coming 10-15 years – a point that I attempt to put into a nutshell in the last 4 minutes of my TEDx talk at Cal Poly.

The video below starts at 13:39 –

Tom’s overall point, however (as far as I can tell), isn’t to question the relevance of sentience. Rather, he’s pointing out that – as humans – it doesn’t matter much to our experience whether AI is conscious or not. In either case, superintelligent AI will impact our lives so significantly that we need to consider the AI transition seriously. Indeed that’s the case – though I’d argue that we should consider consciousness to be a preeminent concern in creating anything smarter than ourselves.

 

This article is part of a broader theme of “Reflecting on What I’ve Read” – where I consider and challenge the ideas of my friends in the AI ethics world, and from authors and thinkers who I’ve read recently. If you have suggestions for topics I should consider for future AI and ethics-related articles, feel free to use the contact form here on DanFaggella.com.

Header image credit: fineartamerica.com