The Singularity – Cart Before the Horse?

The artificial intelligence (AI) field is full of forward thinkers; even in the midst of moving ahead, some remain particularly grounded in the very real philosophical issues that persist in the world of AI. Dr. Karl F. MacDorman, a roboticist and researcher at Indiana University, is a “healthy skeptic,” particularly when it comes to the idea of achieving an intelligence that surpasses our own. As Dr. MacDorman put it in a recent interview, “I think a fundamental question is…whether we have a kind of post-human future” – certainly one of the foremost questions on the minds of scientists and followers of AI alike.

As Dr. MacDorman explains, the quest for immortality assumes a metaphysical position – is consciousness something that can be realized in media other than the human form? If we duplicated every neuron in a human brain and encased the result in the body of a machine, would that make the machine conscious?

If the answer to such questions is a speculative “yes,” then the idea rests on information processing theory, which (in a nutshell) holds that the cognitive processing of information – input and output – is all that is necessary for an entity to achieve a level of consciousness. Of course, humans have particular motivations for pursuing such questions. Dr. MacDorman notes that humans are, by nature, meaning makers, and many of us hope to extend our presence beyond life through some form of immortality – a concept inherent in many religions, e.g., the soul persisting in an afterlife. He points out that even some atheists pursue a form of “everlasting life” through other modes of expression, Freud living on through his writing being one example.

Dr. MacDorman describes two ends of a spectrum in thinking about immortality: one camp believes we cannot achieve immortality, because building machines with human-like intelligence is an impossible feat; the opposite camp believes we can undoubtedly achieve immortality through machines, on the grounds that human beings are simply extremely complex organisms with the ability to self-replicate. Both positions require a leap of faith, says Dr. MacDorman. While he may fall somewhere in the middle, he questions Ray Kurzweil’s idea of the Singularity.

This theory assumes that “we’re going to reach a point at which computers have achieved a human-level of intelligence and then from that point on…they’d be in a kind of god-like intelligence.” Dr. MacDorman’s concerns lie primarily in the qualitative differences between machines and humans.

Computers can do many things that humans can’t – manage the Internet, for example. But for something like the Singularity to occur, a qualitative shift would need to take place. At present, Dr. MacDorman believes there hasn’t been enough work done on artificial general intelligence (AGI) to understand what kind of qualitative shifts would need to take place in machines to achieve a truly human-like level of intelligence. IBM’s Watson, for example, may be able to answer a question about the Gettysburg Address at light speed, but this manipulation of symbols does not necessarily signify intelligence. Watson cannot physically manipulate the world – by picking up a copy of the Gettysburg Address, for example – or make meaning, by spontaneously recognizing the document’s historical significance. Ken Jennings, the trivia whiz who went up against Watson on the game show Jeopardy!, makes a case for the value of human knowledge in comparison to machines in this TED talk.

Dr. MacDorman poses two fundamental problems. The first is the Symbol Grounding Problem, which arises from the hypothesis that intelligent action originates in a symbolic system and that every symbolic system is capable of intelligent action (Loula, A. and Queiroz, J., 2008); in AGI, there remains the challenge of finding a stable representational form from which to build a human-like intelligence. The second is the Frame Problem: a robot’s surroundings are not static, and getting robots to adapt to modifications in their environment presents a string of problems. Though much work was done in this area in the 1980s and 1990s, Dr. MacDorman believes the problem is still relevant today.
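To make the Frame Problem concrete, here is a deliberately small Python sketch. The state, actions, and mid-plan event are invented for illustration; the point is only that a rigid plan encodes no knowledge of which facts remain true once the world changes.

```python
# A toy sketch of the Frame Problem (invented state and actions): the plan
# below was built while the door was open, and nothing in the robot's
# representation tracks whether that fact still holds at execution time.
world = {"door_open": True}
plan = ["move_to_door", "pass_through_door"]  # silently assumes door_open stays True

def execute(plan, world):
    for action in plan:
        if action == "pass_through_door" and not world["door_open"]:
            raise RuntimeError("plan invalidated: the door closed after planning")
        print("executing:", action)

world["door_open"] = False  # an unmodeled event between planning and acting

try:
    execute(plan, world)
except RuntimeError as err:
    print("failure:", err)  # the robot cannot adapt; the plan simply breaks
```

Real planners face the same issue at scale: explicitly enumerating everything that does not change after each action quickly becomes intractable, which is the heart of the problem.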

Dr. MacDorman explains that there is a tension between too much freedom, which leads to the Turing Tarpit, and systems that can perform complex tasks with human intervention but fail when they encounter unanticipated changes in the environment. As John Searle drew attention to in his famous “Chinese Room Argument” (1980), one thing constructed symbol systems lack is a key ingredient that many include in the recipe for intelligence: ‘intentionality’. This intentionality is rooted in an ability to understand language, to ‘think’, and to make meaning.
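Searle’s argument can be caricatured in a few lines of Python. The rulebook entries below are invented for the example; the point is that the program returns correct answers by pure symbol matching, with no understanding anywhere in the loop.

```python
# A toy Chinese Room (invented rulebook): input symbols are matched to
# output symbols by lookup alone; the "room" attaches no meaning to either.
RULEBOOK = {
    "Who delivered the Gettysburg Address?": "Abraham Lincoln",
    "In what year was it delivered?": "1863",
}

def room(question: str) -> str:
    # Pure syntax: find the rule whose left side matches and emit its right side.
    return RULEBOOK.get(question, "No rule for that string of symbols.")

print(room("Who delivered the Gettysburg Address?"))  # prints: Abraham Lincoln
```

The lookup succeeds, yet by Searle’s lights nothing here exhibits intentionality: the program manipulates symbols without ever grasping what they are about.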

This argument spurred the development of many other theories, one being Brooks’s (1990) Physical Grounding Hypothesis, which asserts that machines should be built from the bottom up, using simple processes that interact with a real and complex world and form causal relationships. This hypothesis is one of several that led to the ideas of situatedness and embodiment, concepts embraced by scientists such as Dr. Ben Goertzel in the creation of intelligent robots. Researchers Rolf Pfeifer and Matej Hoffmann at the Artificial Intelligence Laboratory at the University of Zurich, Switzerland, also make the case that we need to look beyond refining AI and revisit the nature of computation in order to accurately incorporate the influence of the environment.

Another fascinating and relevant avenue of research, one that looks at meaning making from a different angle, is the interconnection of language systems, humans, and technology. Underlying the theory of Symbol Grounding is semiotics, the study of how certain things come to represent other things to someone, a field attributed to C.S. Peirce. Deb Roy, MIT professor and Chief Media Scientist at Twitter, has spent the last decade focusing on how people connect words to physical and social contexts. Through the Cognitive Machines Research Group, he and his students have pursued related questions by building robotic and computational models merged with large-scale data analytics, one goal being to create autonomous systems that learn to communicate in human-like fashion. If we are ever to create a human level of intelligence, Dr. MacDorman emphasizes, we need to continue seeking answers to these and other fundamental questions that may further reveal the science behind the qualitative nature of intelligence.