Cognitive Enhancement Will Yield Conflict – VR or Mind Uploading a Necessary Transition

Ray Kurzweil’s The Singularity is Near piqued my interest when he posited his reasoning for why there is likely no intelligent life elsewhere in the universe. As a mere matter of odds, most of us assume (myself likely included) that there simply must be some kind of super-intelligent species “out there somewhere.”

One of the many postulations made (the book is more than worth reading) is that species might – at the point of attaining a certain degree of capacity or intelligence – destroy themselves. Could be bombs, could be nanotechnology, could be super-intelligent computers – but something batters them back to the stone age – or worse.

The Intelligence and/or Transhuman Arms Race

In thinking recently on topics related to ethical enhancement and human enhancement in general, I came to the notion that this “self-extermination theory” might pan out in other interesting and less-considered ways. Generally, the fears are a nuclear holocaust, a super-intelligent computer building killing machines as in the “Terminator” movies, or nanotechnology “grey goo” eating up all the atoms in our atmosphere.

None of these endings should be taken lightly, and all involve a kind of arms race. If the US creates nuclear weapons, then other countries will create nuclear weapons. You know – just in case. Which of course doesn’t make the US want to build fewer of them, but more. If a super-intelligence is given sufficient ability to enhance and continue to build upon itself, there seems to be a kind of threshold beyond which that now-sentient machine can grow its capacities exponentially, and possibly use us for the molecules and atoms we contain to aid in its continued growth.

With Cognitive Enhancements, We’re Unlikely to Get Along

It seems that the ability to enhance human cognition and general abilities (physical and mental) would pose serious social issues and subsequently “enhance” a tremendous number of social problems. Humans, in my opinion, are less likely to agree when they are one hundred times more intelligent than they are at present (or one thousand… or one million)… or when they think differently, value things differently, have vastly different goals, etc.

If, in America, certain intelligence-enhancement technologies become possible (never mind legal), it seems very likely that this information will make its way to other parts of the world. If China legalizes technologies that were illegal (but possible) in America, we’ll probably only accept being “dumber” than China for so long before we aim to catch up. If their super-smart scientists and businessmen start running circles around us, it’s unlikely that we’ll simply accept inferiority. Russia didn’t accept inferiority when America created the atom bomb – and the international game of one-upmanship isn’t really about showing off, it’s about survival.
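The escalation logic above is the classic structure of a prisoner’s dilemma, and it can be made concrete in a few lines. The sketch below is illustrative only – the strategy names and payoff numbers are my own assumptions, not figures from the article – but it shows why “enhance” ends up being each side’s best move regardless of what the other side does.

```python
# A minimal sketch of the enhancement arms race as a one-shot 2x2 game.
# Payoff numbers are illustrative assumptions, chosen only to encode the
# ordering described in the text: dominating > mutual restraint >
# mutual (costly) racing > accepting inferiority.

# payoffs[(a_move, b_move)] = (payoff to A, payoff to B)
payoffs = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: stable parity
    ("restrain", "enhance"):  (0, 5),  # A accepts inferiority; B dominates
    ("enhance",  "restrain"): (5, 0),  # A dominates
    ("enhance",  "enhance"):  (1, 1),  # costly race, parity restored
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes A's payoff against a fixed opponent move."""
    return max(("restrain", "enhance"),
               key=lambda move: payoffs[(move, opponent_move)][0])

# Whichever move the other side picks, enhancing pays more -- so both
# sides enhance, even though both prefer mutual restraint to the race.
print(best_response("restrain"))  # -> enhance
print(best_response("enhance"))   # -> enhance
```

Under these assumed payoffs, mutual escalation is the only equilibrium – which is exactly why a “non-arms-race dynamic” has to be engineered deliberately rather than expected to emerge on its own.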

Developing a “non-arms-race dynamic” for future developments in neurotech and artificial intelligence is – as far as I can tell – among the most critical tasks of the 21st century. Whether we like it or not, such an international agreement or consortium would be a stepping stone on the path to post-human intelligence.

With cognitively enhanced leaders in charge of nations or organizations, disagreements on entirely new philosophical, intellectual and political levels will likely take place more rapidly than at present. This could be said to be the case even if we were “enhanced” in a specific and very calibrated way that led us to get along with others and share a common moral ground or sense of Libertarian “so long as you don’t harm me”-ness.

If one country or party in power sees an opportunity to better themselves or their lot by not conforming to these “get-along” enhancements, they might be able to swoop in with aggressive goals in an effort to take a commanding position as everyone else is playing nice (a dynamic that I mention overtly in my 2014 TEDx talk about cognitive enhancement).

Though there is the possibility of squelching all wrongdoing and vice through enhancement (a difficult task, I would argue), this is not to say that everyone would abide by those standards of enhancement, or that even perfectly benevolent sentient beings can’t have irreconcilable differences that lead to some form of conflict.

If international relations teeter on war today (when humans all have more-or-less the same mental makeup), imagine how much conflict would arise with different leaders having different and unique cognitive enhancements – improving their moral intuition, planning ability, memory, and creativity in varying degrees and directions. This is why I posit that cognitive enhancements (beyond an initial, harmless level) are untenable for human survival. I suspect that the physical world would simply be too fraught with conflict if we enhance our minds in different and diverging ways – hence my alternative theory of “The Epitome of Freedom” (a mind-uploading scenario).

Virtual Reality and Mind Uploading as an Escape Hatch

With this being said, there seems to be a low chance of eliminating grand conflict when multiple mega-intelligences are vying for influence at once. If we each want the freedom to enhance ourselves in our own creative ways (just as we decide today how we “customize” other areas of our inner and outer lives), it seems difficult to imagine a peaceful world or peaceful co-habitation.

If we seek a kind of custom or tailored enhancement, this may require a simulated reality where we are able to live in the labyrinth of a mental world, where we could choose to experience what we wish, when we wish, how we wish. This would allow for the creative impetus of humanity to expand in whatever direction it likes – without interfering with or encroaching on the wellbeing of other sentient entities.

Though, who’s going to step into the virtual reality willingly? I argue many will, once sufficient evidence supports its utility in the attainment of desired ends (happiness, longevity, richness of experience, etc.). However, if a computer virus were able to attack all of the conscious beings trapped in these machines, that might be a fate worse than hell (a San Junipero-like scenario gone wrong). In addition, such a virtual world would have to be managed by someone or something in the physical world.

I’ve argued that the struggle to control the physical hardware that houses the majority of human experience (which will at some point be virtual) will be a gargantuan power struggle.

In the remaining part of the 21st century, all competition between the world’s most powerful nations or organizations (whether economic competition, political competition, or military conflict) is about gaining control over the computational substrate that houses human experience and artificial intelligence. (Full article: Substrate Monopoly – by Daniel Faggella)

What seems to make the most sense – though its implementation will be tremendously difficult – would be the close monitoring and examination of all consciousness-altering technologies, ensuring that their roll-out is not premature, inadequate, destructive, or otherwise dangerous. Easier said than done.

Though, the regulation of technology will be much harder once everyone with a laptop can create virtual realities and clone mammals.


Header image credit: Lawnmower Man (1992 film)