Human-Robot Relations and the Power Struggle – A Perspective from Dr. Kevin LaGrandeur

Author and professor Dr. Kevin LaGrandeur of the New York Institute of Technology was an early adopter of Internet technology in the 1990s, specifically interested in how digital technology could enhance education. Combined with his love of literature (he is an English professor), these divergent interests led him to explore the presence of artificial intelligence in early modern literature, what it illuminates about the human psyche, and the resulting ethical implications.

While researching his book, Dr. LaGrandeur found references to automata dating back to the time of Aristotle and even earlier. He remarks that likely the first reference to an android-like being can be found in Book 18 of Homer's Iliad. When Thetis visits the palace of Hephaestus, the reader bears witness to golden serving maidens able to function as humans, as well as an "R2D2"-like robot that serves as a mobile serving platter and responds to human demands. The ancient Greeks were clearly ahead of their time in considering the ramifications of humans co-existing with artificial intelligence; Aristotle refers back to the Iliad in his Politics when discussing slavery. This foundation led to the central theme of Dr. LaGrandeur's book, Androids and Intelligent Networks in Early Modern Literature and Culture: Artificial Slaves: the idea of artificial slavery. "Automated humanoids or humanoid-like robots, in their (the ancient Greeks') mind, come up in connection to slavery every single time," remarks Dr. LaGrandeur. The ethical ramifications are even more relevant today.

The humanities and philosophy are, by nature, intricately connected; Dr. LaGrandeur points out that most, if not all, Western literature before the 19th century had some connection to the Judeo-Christian Bible and its morals. Literature wasn't considered worth reading if it offered no ethical or moral consideration. As a modern humanist, Dr. LaGrandeur sees the responsibility of modern-day intellectuals as contributing knowledge that can be used beyond a particular domain or field: applying what we know in order to help the world on a more global scale.

What we think of as new approaches or attitudes toward technology are not really "new", says Dr. LaGrandeur. The technology-driven preoccupations that obsess us today (broadening our horizons, making humanity stronger and more powerful, superseding nature) represent an archetype that extends back to early humanity and seems to be built into the human psyche. In the modern era, science fiction may be the most prevalent genre in which these ideas are explored; yet once again we see examples arising even in ancient mythology: Prometheus steals fire, the most basic technology of the ancient world, from the gods, and those who try to step into the territory of the gods, remarks Dr. LaGrandeur, usually end up being punished.

Dr. LaGrandeur expresses the idea that these are two sides of the same coin: as we expand our powers and enhance ourselves, we simultaneously reduce ourselves, giving away part of our agency and ability to affect the world. Volition and decision-making are two key terms, which Dr. LaGrandeur traces back to the father of cybernetics, Norbert Wiener: "When we give away our decision-making capabilities to robots or intelligent technology, we're giving away essentially our souls." This is the fire with which our progressing society plays today. "Let's say I had a robot butler, and I could delegate authority…how much would I actually give away?" Dr. LaGrandeur describes giving away responsibility after responsibility: first tracking the food in the refrigerator, then taking care of all the finances, and, while we're at it, buying flowers for a significant other. The ultimate question is: where do you cross the line between delegation and dependence? Robots will make decisions based on their programming, but there is no guarantee that, as they evolve, their actions will always be in sync with human demands.
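To make the slope from delegation to dependence concrete, here is a minimal sketch in Python; it is purely hypothetical (the "robot butler", its duties, and its rules are all invented for illustration, not drawn from Dr. LaGrandeur's work). Each delegated duty becomes a standing rule that fires exactly as programmed, whether or not it matches what the owner would have decided that day:

```python
# Toy model of delegation: every duty handed to the butler is one more
# decision the owner no longer makes. All names here are hypothetical.

class RobotButler:
    def __init__(self):
        self.delegated = {}  # duty -> the standing rule the butler follows

    def delegate(self, duty, rule):
        """Hand over one more responsibility to the butler."""
        self.delegated[duty] = rule

    def decide(self, duty, situation):
        """Act on programming, not on the owner's present intent."""
        if duty not in self.delegated:
            return "ask the owner"  # still the human's decision
        return self.delegated[duty](situation)

butler = RobotButler()
butler.delegate("groceries", lambda s: "reorder milk" if s["milk"] < 1 else "do nothing")
butler.delegate("finances", lambda s: "move savings to bonds" if s["market"] == "down" else "hold")

print(butler.decide("flowers", {}))                   # -> ask the owner (not yet delegated)
print(butler.decide("finances", {"market": "down"}))  # -> move savings to bonds, as programmed
```

The rule fires exactly as written, which is not necessarily what the owner would have wanted; and the more duties move into that dictionary, the less the owner is asked at all.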

Dr. LaGrandeur references Bill Joy's lengthy essay, "Why the Future Doesn't Need Us", which appeared in the April 2000 issue of Wired. Joy makes the astute observation that we consistently overestimate our programming abilities. In developing pieces of these technologies, we are often unaware of how they may become part of a grander and more complex system. His reflections on his own contributions to software development are embedded in his concerns about the implications of the ever-quickening pace of innovation. At the very least, shouldn't we be asking ourselves how we can best co-exist with this new technology? And shouldn't we proceed with a little more caution?

In literature from thousands of years ago, we see the first visions of humanoids that can do the things humans don't want to do. Our forefathers expressed worries that are echoed by today's more cautious forward thinkers: how do you guard against the roles of master and slave being reversed between human and machine? In a future epoch of co-existence with intelligent robots, how will our interactions evolve and be managed? This question hasn't been overlooked by South Korea, which proposed a Robot Ethics Charter in 2007. The charter, apparently still being drafted, is intended to prevent the misuse of robots by humans and vice versa, in anticipation of "a robot in every South Korean household by 2020."

This code of ethics was inspired by author Isaac Asimov, whose short story "Runaround" proposes his famous Three Laws of Robotics. A prescient notion, though at the time he may have been a bit short-sighted: the laws address only how robots should treat humans, not humans' responsibility in their interactions with robots. These types of questions have led to the development of the field of "machine ethics". Editors Edward Carr and Tom Standage of the Economist share an interesting discussion of the ethical implications and societal ramifications of increasing machine intelligence and decision-making capability.
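To see why even Asimov's famous laws are hard to operationalize, consider a purely illustrative sketch (every class, field, and example action below is invented, and "harm" is flattened to a single boolean, the kind of simplification real machine ethics begins by rejecting) that checks a proposed action against the Three Laws in priority order:

```python
# Illustrative-only encoding of Asimov's Three Laws as an ordered veto check.
# Reducing "harm", "orders", and "self-preservation" to booleans is exactly
# what makes machine ethics a hard open problem rather than an if-statement.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # First Law: a robot may not injure a human
    ordered_by_human: bool  # Second Law: a robot must obey human orders
    endangers_self: bool    # Third Law: a robot must protect its own existence

def permitted(action: Action) -> bool:
    if action.harms_human:            # First Law vetoes everything below it
        return False
    if action.ordered_by_human:       # Second Law yields only to the First
        return True
    return not action.endangers_self  # Third Law has the lowest priority

print(permitted(Action("fetch coffee", False, True, False)))     # True
print(permitted(Action("push a bystander", True, True, False)))  # False: the order is vetoed
```

Notice that nothing in such a scheme constrains the human giving the orders; that gap is precisely the one Asimov left open and the one machine ethics now tries to close.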

But the question remains: how long can humans maintain control over robots, particularly when it seems we must relinquish increasing amounts of control through programming in order to give robots and machines more autonomy and advance human progress? Control is almost always an illusion, a realization surely familiar to those at the forefront of developing and using technology. Even though the majority of the human race has no malicious intent in developing such technologies (quite the opposite), Pandora's box has been opened, and misuse and abuse by humankind will inevitably follow. Our foresight and intelligence will be tested by our collective ability to predict, hedge against, and counteract dangerous situations. A closing thought: power wielded without the learning to control it is the resounding message of the tale of the sorcerer's apprentice, a story that goes back almost 2,000 years, and there is no handing back the hat once the technology crosses the threshold of human imagination.