“It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be…” (Isaac Asimov, 1978)
Isaac Asimov, the Russian-born American author and biochemist, was onto something simple yet profound. Change is constant, and its implications are nowhere more evident than in the field of advancing Artificial General Intelligence (AGI). Dr. Ben Goertzel, American author and researcher in the field of AI, cites an early interest in science fiction as at least partly responsible for his entry into AGI. Dr. Goertzel recalls an Isaac Asimov novel in which people retreat into solitary worlds of virtual reality, estranged from the meaning of social relationships.
His reference to the novel sprang from an at-present unanswerable question: will the continual rise of AGI increase our cooperation with and compassion for fellow beings and other intelligent forms, or will it give way to increased conflict and/or isolation? Dr. Goertzel sees today's technology as fostering more social connection than ever before, pointing to Facebook and other social media as modern evidence of this phenomenon. Dr. Keith Hampton, an Associate Professor at Rutgers University, researches the relationship between information and communication technology (including the Internet), social networks, and community. In a paper published in 2011, Dr. Hampton et al. investigated the relationship between social networking sites and our lives. The results led to some general conclusions: social media users are more trusting, have more close relationships, get more social support, and are more politically engaged. While these findings are still young, the idea is also supported by the media equation, a theory developed in the 1990s by Stanford researchers Byron Reeves and Clifford Nass, who drew on a collection of psychological studies to form their overarching claim that people treat computers, television, and media as real people and places.
The intersection of these findings has interesting implications for the future, and it raises the question: what role do social relationships and collaboration have in the future of AGI? In the book Social Intelligence: The New Science of Human Relationships, American author and researcher Daniel Goleman illustrates new findings in neuroscience showing that the brain's design makes it sociable; we subconsciously form neural bridges that let us affect the brain (and the body) of those with whom we interact, and the stronger the connection, the greater the mutual force. What effect might these constructed feedback loops have on human interaction with technology? Dr. Goertzel's views seem to align with people's general tendency toward the social in their relationship with technology. He describes his vision of the future of brain-computer interfacing, with one possible result being a sort of Internet-mediated telepathy: “…if I put a brain chip in the back of my head, you have one in the back of your head, we could beam our thoughts and feelings to each other indirectly, so if you have human beings with their brains networked together like that, that would seem to enable both a level of empathy and…understanding…than what we have now”.
The OpenCog Foundation, of which Dr. Goertzel is Chairman of the Board, works on projects rooted in the vision of advancing artificial intelligence beyond human levels within the next few decades. Project work is done by multidisciplinary teams and individuals located in various parts of the world, which can make unified collaboration and a shared understanding of ideas a challenge. Dr. Goertzel speculates on using brain-computer interfacing in the formation of various ‘group minds’: picture a software development team, all sharing thoughts and an understanding of the code as they work, perhaps an early-stage AGI processing system before AGI becomes much smarter than humans. “This sort of interfacing could allow us to become closer to each other and closer to our developing intelligent machines and in a way, going beyond the whole individualistic model of the self that we have now”: a speculative but conceivable reality. Such interfacing speaks to the idea of an increasingly united global mind and consciousness, presumably a more efficient way to transfer information and feelings than anything we have at present.
A stark difference seems to lie between people's potential relationships with greater intelligences and how people interact with today's technology, which responds to human control rather than engaging us on a human-like level. What happens when we reach what Ray Kurzweil dubs the ‘Singularity’, the point where unenhanced human intelligence is unable to keep up with the rapid advance of progress in artificial intelligence? Might our social and emotional natures be our potential downfall as a species? It would seem that once this ‘Singularity’ is reached (or is close to being reached), the unenhanced human brain will be vulnerable to manipulation by more intelligent machines, able to instantly mine Internet data, or even the information in our minds, and effectively influence our decision-making and thought patterns.
This seems like a bleak outcome for humanity, especially to those of us who identify with the social and emotional idea of what it means to be human. How do we prepare ourselves for what may lie ahead? “We all have the tendency to become attached to specific ideas about what’s going to happen”, notes Dr. Goertzel. He describes one of human beings’ greatest challenges as staying open-minded: being aware of the need to constantly change and adapt in the face of obsolete ideas and a quickening pace of new information, “…ultimately on the most profound level of who we are and what we are and why we are…some of our thinking about the singularity itself…ideas like friendly AI…what is friendly to humanity, what is humanity?” Faced with such difficult questions about human existence alongside ever-advancing AI, the necessary evolution of human thought is both imminent and real.