I’ll begin this short entry with a supposition that might seem bold:
For the remainder of the 21st century, all competition between the world’s most powerful nations and organizations (whether economic competition, political competition, or military conflict) will be about gaining control over the computational substrate that houses human experience and artificial intelligence.
By “computational substrate that houses human experience”, I mean:
The servers and computers that host our digital lives – which increasingly are our entire lives (our work, our communications, our records and notes, etc.). As augmented reality and virtual reality become ubiquitous, the systems that support these virtual spaces will essentially be “worlds” unto themselves, virtual “places” where people spend most of their time immersed, more important than physical reality.
By “[computational substrate that houses] artificial intelligence”, I mean:
The computers where the algorithms and data are stored, and/or where the processing is done, in order to enable AI.
As the next decade rolls forward, digital space, digital work, digital [everything] will become preferable to (and often objectively better than) the “real” world. “We are going in”, as I like to say, and the digital world will continue to become more and more prevalent.
Similarly, as the next decade rolls forward, artificial intelligence will play a role in all of the software we interact with, and potentially in many of the decisions made by large governments and organizations.
The computing power that houses (a) digital human experience, and (b) the power of artificial intelligence, will, year-by-year, be the most valuable physical “stuff” on earth.
If the bulk of virtual experience – the “digital life” that people live – is hosted and served by a single company, that company will wield tremendous power over the wellbeing of users, the productivity of users, and even the political opinions of users. The more time people spend immersed in these environments (talking with friends in digital space, doing all of their work in digital space, etc.), the more influence will be wielded, and the more valuable data the owning organization will harvest.
If the bulk of the most powerful AI solutions (and the data that goes along with them) belongs to one country or government, it will wield a dramatic advantage over competitors of all kinds. As AI becomes more capable, this is more and more the case. As tech giants win the “winner takes all” game of AI, this power will continue to concentrate.
I do not suspect that this competition will be open to the lesser powers (less powerful nations, or smaller and less powerful companies and organizations), but in the strata of the most powerful, all competition will be a proxy for controlling the substrate.
The End Game of Substrate Monopoly
The “end game” of all of this competition over the substrate would be something like this:
- Early Game: Sometime in the decades ahead, as a large percentage of humans live 50% or more in immersive virtual worlds, it might be possible for a nation, company, or consortium to own (through creation or acquisition) most of the processing power behind these digital experiences.
- End Game: The physical world is only nominally important, essentially all humans live in some kind of immersive digital environment, enabled by brain-machine interface, and the hardware to support all of their experiences is owned / controlled by a single organization or group.
- In this last phase, there is essentially no competition, and the “winner” is dominant in a most definitive way. There is no one to physically confront this victor, who could hypothetically control not only the AI and physical robots in the world, but also the digital experiences of humanity. This would be “playing God” in a most literal sense, and is – as far as I can see – the most far-reaching concentration of power imaginable.
There are some of us who believe that the “End Game” scenario is reasonably likely within our lifetimes. There are others who believe that only the “Early Game” is reasonable in the coming 15-40 years. In either case, the moral considerations are grand, and the consequences for the concentration of power are significant.
In either case, the substrate is becoming more important than the physical world, and the rules and experiences and treatment of people within digital platforms will at some point surpass the impact of the rules and experiences and treatment of people within states. I paint the picture of the transition from the physical world to the computational substrate in greater depth starting at 8:21 into my TEDx talk at the University of Rhode Island:
What “matters” is experience itself. Perception is reality. Whatever wields influence over experience (and can cause it to suffer or experience joy), whatever is experience in and of itself – matters.
Already Google is considered by many in the world of international relations to be more powerful than many states. Imagine Google, but as a virtual reality world through which people work, play, and experience most of their “reality”.
Zuckerberg didn’t buy the VR innovator Oculus for no reason. Owning the human experience is certainly on the radar for firms like Facebook, which are well aware of the power that will be afforded to companies with a more direct influence over the day-to-day experience of human beings than the countries those people live in.
Eventually, nothing will “matter” except for the “matter” (substrate) that houses intelligence and sentience itself.
Substrate Monopoly – Precedents Today
We have some current precedents for concern about this “substrate monopoly” idea, including:
- Facebook’s open platform being so immersive and influential that it can allow foreign parties to sway the elections of the most powerful nation on Earth
- Google’s total monopoly on online search, gobbling up the world’s data about user intent/customer behavior for a single private firm
- DJI (reported to be the world’s largest drone manufacturer) allowing the Chinese government to access the data from all of its cameras, possibly pooling a huge percentage of the world’s drone security footage into a single repository
I’m not suggesting that the data / AI giants of today (such as Tencent, Amazon, Facebook, Google, or the governments of China and the USA) are in some way malicious. There is a quote from Bonaparte’s memoirs, which goes something along the lines of: “Among those who rail against tyranny are many who long to be tyrants.” I’m always skeptical of playing the role of the “little guy who of course is the good guy”, a veil that I think is easy to see through.
It’s easy to pretend to be the “good guy” when you’re facing larger, stronger rivals – but in my estimation, most people in power do what most people would do if they were in power (read that twice if you need to). Everyone who is powerless wants to tell the other powerless people “Hey, if you put me in office, I’ll take care of you, instead of simply serving the party or playing other games to sustain power.” In general, people are people, and people in power are people in power.
This isn’t about the moral fiber of Facebook execs, or the leaders of the US Department of Defense, and I’d prefer that this not be an “us versus them” dynamic (i.e. “the powerful are bad, the weak are good”). Rather, I see us as “team humans”, trying to grapple with the fact that incentives don’t always permit easy cooperation, and we have to hedge against concentrations of power that would permit dominance of the many by the few (in a global and technologically-enabled way).
The Zuckerberg / Musk AI Debate
As far as I’m concerned, this is how power works:
- If you are likely to control the substrate that houses experience and the most powerful AI (i.e. If you are going to be dominant), then you’d like to keep quiet about the extent of your power, and downplay the threat that this power poses
- If you are unlikely to control the substrate that houses experience and the most powerful AI (i.e. if you are likely to be dominated), then you’d like to make noise about the extent of the power that your dominant rival wields, and the potential for this power to be abused
In terms of international relations, you could call me a structural realist for believing those two bullet points above, and maybe in some respects I am. Again, I’m not presuming that the “dominated” parties are more virtuous, they’re merely looking out for their own interests. Again, the Bonaparte quote: “Among those who rail against tyranny are many who long to be tyrants.”
Carrying this idea forward:
- Zuckerberg and LeCun (Facebook) and Andrew Ng (when he was with Baidu) predictably downplay AI risk and the dangers of artificial general intelligence. Would they be so quiet if they saw one of their rivals pulling ahead drastically in the AI race?
- Musk – a man with great power but without a good chance at owning the most powerful AI or owning the bulk of human experience (he’s in a much worse position for those goals than Facebook, Google, or Baidu) – predictably plays up the dangers of artificial intelligence, and of the concentration of power at companies like Facebook and Google. He has openly expressed his concerns across social media, and has helped create OpenAI. Would he make much noise if he believed himself likely to be at the helm of the superintelligence in a few decades?
In the century ahead gaining a substrate monopoly is the only way to become so strong as to actually become safe. Those who are aware of this dynamic and cannot obtain such a monopoly will at least want to tear that ultimate (in the literal sense) power and glory from their rivals.
Maybe I’m misled in my interpretation of these tech leaders – but I’ve laid out my position, and I happily accept critiques thereof.
Why Discuss This Now
The recent Russian interference in the 2016 US election via Google and Facebook has raised awareness of the power of all-encompassing platforms to be wielded or hijacked for potentially dangerous purposes. Facebook’s ability to deliberately tinker with the emotions of users (2014) conveys the extent of its power as well. People are open to this conversation now, and frameworks should be established to consider the fate of our digital lives.
I’m not necessarily advocating for a breakup of the big tech firms who are on the cusp of owning the experience of millions or billions of humans – but I am advocating for a discourse about what kind of future we’re building, and who it will serve. States that ignore the preeminence of the digital world and count on managing the wellbeing and defense of their citizens solely through the physical world will be – and indeed already are being – outpaced and outsmarted by a better “motherland”, a motherland customized to the needs and goals of users.
I repeat: I predict that in the future, the competition between states and companies will more and more be for one reason – to control the substrate that houses human experience and artificial intelligence.
We ought to think through the competitive dynamics involved in this transition, because they’re playing out (albeit in their infancy) right now. Laying a groundwork for international relations will be necessary if we plan to keep some semblance of governance and influence in the hands of states – or if we plan to hedge against a monopoly on the influence of human minds.
Header image credit: Microsoft