2013 / 21 August

Machine Consciousness in All Its Flavors – Dr. Peter Boltuc


“It’s alive!” The famous words of Dr. Frankenstein still ring in our ears as we imagine an inhuman, giant green man sitting up straight, arms stretched forward.

Though the scene incites fear of the monster’s dark and dangerous power, more than anything I think it should incite a sense of responsibility. It seems reasonable to argue that the topics which hold the most moral weight are those involving the destruction of, tinkering with, and creation of consciousness itself.

Dr. Peter Boltuc is a philosophy professor at the University of Illinois whose most active work remains in the field of machine consciousness. In our July 2013 interview, I was able to discuss Dr. Boltuc’s well-informed distinctions and predictions about machine consciousness and consciousness itself. That conversation, coupled with his article “The Philosophical Issue in Machine Consciousness,” was the fuel for this article.

Types of Consciousness

Dr. Boltuc’s work aims to distinguish various “types” of consciousness to further refine and understand consciousness itself. The three types distinguished in his paper above are functional, phenomenal, and hard consciousness (respectively: f-, p-, and h-consciousness).

Functional consciousness implies an intelligible response from a system. Phenomenally conscious beings are said to experience “qualia,” or sense-perception (sight, sound, etc.). A hard-conscious being can be said to subjectively experience those senses with awareness.

Dr. Boltuc claims that many machines might be said to be functionally conscious today, in that they can respond to stimuli in appropriate ways to attain an intelligible end. Right now, systems are being built around the world in an effort to develop phenomenally conscious machines (such as the robot “Rolling Justin,” which has learned to catch a thrown ball). At present, there is still great debate over whether any machines actually are phenomenally conscious, and the general consensus seems to be that “hard” consciousness has almost certainly not been created within a machine.

Below is a video of philosopher David Chalmers explaining the “hard” problem of consciousness and his suppositions about what it means and implies about the nature of consciousness itself.

A Clash of Distinctions

Dr. Boltuc’s approach to understanding the various types of consciousness rests on his own definitions of the terms. Some well-known consciousness philosophers, he notes, view p-consciousness “in the non-functional way only, while I leave the notion of p-consciousness to the internal functional uses and introduce the notion of h-consciousness as the non-functional concept.” Boltuc explains how his view differs from Ned Block’s:

“(Ned) Block’s more functional understanding of p-consciousness seems to designate something of heuristic value — the first-person functional description. For instance, such definition is used in the description of LIDA robots, which are defined as what those authors call phenomenally conscious and distinguish from only functionally conscious AI architectures.

While the authors identify phenomenal consciousness with the subjective experience of ‘qualia’, in fact they claim that adding a mechanism to stabilize the perceptual field ‘might provide a significant step toward phenomenal consciousness in machines’. It seems to me that their view misses something but also that their project pertains to something of not only practical but also epistemological importance. Such a mechanism would enhance quality of phenomenal consciousness, if the latter was already present (this way the step is significant) but I doubt whether the mechanism would help explain, or produce, phenomenal consciousness.

My criticism is shared by Haikonen who claims that the loss of stable perception ‘does not lead to the loss of any kind of phenomenal consciousness’ but merely to some deformations within its content and that ‘thus stable perception cannot be a cause for phenomenal consciousness’. The distinction between cognitive architectures that satisfy numerous features of subjects, such as stability of the perceptual field, and those that do not, is a helpful practical distinction that does deserve its own term, needed in AI and related disciplines, but it is not the narrow meaning of p-consciousness advocated by Haikonen, Chalmers and others.” (Quoted from Dr. Boltuc’s paper here)

Philosophy of mind may not be the easiest subject to wrap your head around, but the moral implications of the distinctions and understanding of consciousness are tremendously heavy. This is particularly applicable in a world moving more and more towards the enhancement and creation of sentience.

Might it be the case that “consciousness” will never be “solved”?

Could be. My hope is that the work of Boltuc, Block, Chalmers, and others will be furthered and tested by the findings of science, and will help us make meaningful decisions about the serious ethical considerations of the present and the future.

Thank you again to Professor Boltuc; you can find his academic page here at the University of Illinois.

-Daniel Faggella

PS: Prof. Boltuc’s slides on Non-Reductive Machine Consciousness can be found here.
