Ray Kurzweil’s The Singularity is Near piqued my interest when he posited his reasoning for why there is likely no intelligent life elsewhere in the universe. By a mere matter of…
“Digitized and digested” is the future scenario which I consider to be a likely long-term future for humanity, should we survive the initial growing pains of the singularity. By long-term, I could mean 100 or 10,000 years – though the real timeframe is likely to lie somewhere in between.
The scenario is as follows:
Phase 1: Digitized
At some point, cognitive enhancement makes shared physical existence between humans dangerous, because drastically different sets of values and mental capacities make peaceful coexistence more or less impossible. Immersive VR eventually gives way to mind uploading and full-blown San Junipero-style existence in a virtual world.
As humans enter a digital substrate to live in an expansive virtual world (hopefully full of blissful experiences of many kinds we can’t possibly imagine), a group of enhanced humans – or more likely a superintelligent artificial intelligence – will manage the computational substrate that houses the human simulations.
Phase 2: Digested
Eventually, the substrate manager (be it AI or enhanced humans) will have what it believes to be better or more productive uses in mind for the computational resources that are being used to house the many human simulations. At this point, the substrate manager will remove the human simulations (ending their conscious experiences) and use that processing power for other ends which it deems to be more important (possibly inventing a new technology, exploring a new part of the galaxy, calculating a massively complex problem about physics, etc).
There are many other potential future scenarios for the trajectory of intelligence – but if we care about the continuity of individual human consciousness (though it’s not completely clear that under all circumstances we should) – then “Digitized and Digested” seems to be about as good a scenario as could be conceived.
Assuming this kind of mind uploading is possible, there will be extreme conflict (likely violent military conflict) around who controls the substrate that houses this human intelligence.
Most of humanity will be willing to enter this uploaded state in order to exist blissfully and carefree, while a small minority will see the tremendous opportunity for control and power that a transhuman transition affords them (see: “Lotus Eaters vs World Eaters”). It is this latter group that will control the substrate housing AI and the majority of uploaded human sentience.
Given how drastically ideas and instincts around morality have evolved from apes to humans – and how frequently human values cause conflict – it is essentially impossible to believe that an ever-expanding superintelligence would be interested in sustaining human life as a major priority. I’ve argued (in a short essay called “the Moral Singularity”) that an expanding intelligence is likely to change and mold its equivalents of what we call “morality” (which will be distantly beyond human comprehension) many times over – sooner or later doing away with uploaded humans.
With any luck, however, the uploaded humans would experience many years of blissful, expansive existence before the superintelligence that controls their computational substrate decides to use that computation for more worthwhile ends. This would not be unlike the fate of many species over the previous billion years of life on Earth: they made their mark, they spawned forth new forms, and eventually those new forms pushed out the old.
Note: I’m in no way trying to frame the “digitized and digested” scenario as the most likely one, nor am I certain about the possibility of mind uploading. Rather, this scenario represents a plausible best case (from the perspective of humanity) for the post-human transition.
Image credit: Visit Norway