Selfless Robots – Reflecting on Hughes’ Work in “Robot Ethics”

Over the past two or three weeks I’ve been digging into a lot of material from the blog and article sections of the IEET website – and recently I stumbled onto a PDF draft of a work by Dr. James Hughes called “Compassionate AI and Selfless Robots.”

The work prompted a number of questions that I thought would be important material for future posts and future conversations in the field in general. I have firmer assumptions about some of the addressed topics than I do about others, but all seemed worthy topics for debate.

The Constituents of Consciousness

Hughes mentions the Buddhist notion of the five skandhas:

1. The body and sense organs (rūpa)
2. Sensation (vedanā)
3. Perception (samjñā)
4. Volition (samskāra)
5. Consciousness (vijñāna)

He poses: “One of the questions being explored in neuroscience, and yet to be answered by artificial intelligence research, is whether these constituents of consciousness can be disaggregated.”

In other words – can a conscious entity exist if it lacks any of the above “constituents”? My best guess would – for several reasons – be “most likely.”

In ancient times we suspected that all things were composed of elements like fire, water, air, or earth. Later we learned of cells and molecules and atoms and atomic particles.

Similarly, in ancient times we suspected a certain set of elements was involved in consciousness. It seems safe to assume that we haven’t “nailed” the topic of consciousness yet, and that many, many amazing discoveries about it lie ahead. It seems unlikely that our ancient guesses will have encapsulated it all – just as the classical elements did not encapsulate what the world is made of.

In addition, I would suspect not only that a conscious entity might exist without all five of the above elements, but that there are additional “components” or “constructs” of what we call consciousness that we have yet to discover.

It seems reasonable – from my present knowledge – to see humanity conceiving of a consciousness without a body, or even an awake, aware being without volition (our own volition is not certain, never mind that of other animals we consider “conscious,” like deer or moles). It also seems reasonable that there are capacities beyond the five skandhas which could be leveraged by a conscious entity beyond mortal man. Many of these capacities, I believe, we are incapable of imagining at present (as the deer is incapable of imagining the joy of writing poetry).

On the Cultivation (Not Just “Programming”) of Virtues

Hughes ends this work with this paragraph: “Buddhist ethics counsels that we are not obliged to create such mind children, but that if we do, we are obligated to endow them with the capacity for this kind of growth, morality, and self understanding. We are obligated to tutor them that the nagging unpleasantness of selfish existence can be overcome through developing virtue and insight. If machine minds are, in fact, inclined to grow into super intelligence and develop godlike powers, then this is not just an ethical obligation, but also our best hope for harmonious coexistence.”

I certainly agree that they’ll (it’s funny, calling imaginary future superintelligent machines “they”) need some kind of ethical sense in order not to destroy us or harvest our biological materials. I think the idea of this sense being “cultivated” is important to note. Even if we are able to program a “past life” with a million moral lessons learned, a machine will still need to be able to iterate and respond to what is happening in real time in order to make sense of its moral life (if we wish for it to have one).

However, I am wary that once a machine attains anything close to god-like powers, it will not be remotely possible to cap its moral thinking to make sure that it still values “us,” or anything else for that matter. As the machine grows and changes, iterates and processes, aggregates data and input, and engages in cognition, moral thinking, and decision making on levels we cannot imagine, it is not going to be possible to sit across a table from it and ensure that it’s still “cool” with us and not eager to use us for our parts – or simply eradicate us.

I certainly don’t have the answers either, but my hope and assumption (there we go with those assumptions again) is that conversing on these topics, and understanding their importance during the construction and early workings of these machines, will give us as good a chance as we can get to ensure the betterment of life through these technologies.

– – –

To read Dr. Hughes’ original article, please find the link at the top of this blog. You can find my interview with him in Episode 2: The Transition to Transhumanism.

All the best,