Hugo de Garis is one of the first AGI thinkers that I came across in 2012, when I decided to focus my life on the post-human transition. Aside from Bostrom and Al-Rodhan, few thinkers molded my early ideas about AGI and transhumanism more than de Garis.
I believe that two of his ideas are extremely important, and are somewhat absent from most artificial general intelligence conversations today (and even from most of the discussions from 2010-2014). Those ideas are what de Garis calls “Globism” (global world order) and “Cosmism” (the belief that humanity should create deity-level machine intelligences).
The following screenshot is from de Garis’s (seemingly neglected) online blog:
Since first exploring Kurzweil’s ideas in The Singularity is Near, I have found it evident that the default mode of technology development is competition – the economic or military “state of nature” – and that conflict is extremely likely if new forms of thinking and valuing come into existence.
If this is the case, then humanity must get on the same page about “the two questions”:
1. What is a beneficial transition beyond humanity?
2. How do we get there from here without destroying ourselves?
I dearly hope that the United Nations develops Sustainable Development Goals for (a) determining a shared end-goal of the post-human transition, and (b) uniting humanity in a safe transition to the post-human condition. This is probably too much to ask, though I wish I had been able to address these topics directly when I gave my last presentation (on deepfakes and AI security concerns) at United Nations HQ. I suspect it’ll be a few years until I’m invited to talk about the grand trajectory of intelligence.
This will almost inevitably involve global transparency and security around emerging tech, and it will almost inevitably require some kind of shared cosmopolitan plan for how we transition beyond humanity in a reasonably safe way. No safe path is yet evident, but almost anything seems preferable to the state of nature (i.e. the US and China building AGI and cognitively enhanced soldiers and competing for supremacy).
Only some kind of global governance structure could achieve such an aim. It seems remarkably unlikely that humanity will pool its efforts unless it experiences a dire and horrible tragedy, one that forces us to realize that we must “get on the same page” about the end-game of intelligence, and a shared human vision thereof, or perish. I fear that fear alone may be the only thing capable of uniting us.
Literally every TEDx talk I’ve presented has ended with this same call to action. Namely, some form of: “We should facilitate international discourse to determine where this AI stuff goes, guys, because the default isn’t pretty.”
Kurzweil, the supreme optimist, doesn’t address the arms race concerns directly, and doesn’t seem to be concerned with them.
Hugo is concerned, however, as I think more of us should be.
I believe that de Garis is right, and that global world order (or war) and post-human artificial intelligence (or war) are both near-inescapable within the coming 50 years.
In the long run, if we are to survive the coming waves of emerging tech (hundreds more years), we’ll need global world order.
In the long run, if any remnant of the legacy of homo sapiens is to survive (billions more years), we’ll need to create a kind of greater, vaster intelligence which can further discern nature, discern the good, and perpetuate itself into the galaxy. In the words of Lucretius, to “[Raid] the fields of the unmeasured All.”
Whether we can live out a few more good centuries (or decades), and whether we can pass the baton properly (whatever that means), is yet to be determined.
Header image credit: Adam Ford’s YouTube Channel – https://www.youtube.com/channel/UCEne3aAgie57IvOZKKSxi9Q