The “Grand Trajectory” refers to the direction of development of intelligence and sentience.
If the following two hypotheses are true:
- The moral worth of an entity can be measured – as best we can – by the richness of its intelligence and sentience (consciousness). Humans have a wider range of emotion, creativity, memories, and capabilities than chimpanzees, and so are seen to be more morally worthy. The same can be said comparing monkeys to rabbits, or rabbits to salamanders, or salamanders to beetles (I’ve argued this general point at the start of my 2014 TEDx).
- Higher levels of intelligence and sentience are likely to be created from augmenting human minds or creating superintelligence in a digital substrate (AI) – thus creating more morally valuable sentient entities than anything currently on earth.
Then the following is almost certainly also true:
- Determining the trajectory of super-human intelligence and sentience is the single most morally valuable task for humanity – dwarfing any other concern of the present.
The potential end-game destinations of developing super-human intelligence are many, but most imagined futures boil down to some variation of the following:
- Happy Humans: Homo sapiens (humans as we know them) are served by artificial intelligence systems that help us live longer, happier lives (gravely naive, in my opinion).
- Enhanced Happy Humans: A future where humans enhance themselves and live in gradients of blissful states, in relative harmony with other biological life and with artificial intelligence entities designed to help humans.
- Violent Destruction: Non-sentient artificial superintelligence violently wipes out all life on earth – including humanity – extinguishing all sentient and intelligent life on the planet.
- Gradual Creative Destruction: A future where humans gradually fade away and are replaced by various posthuman AI species, which populate the galaxy, discovering immeasurably deeper scientific and moral truths than humans could ever possibly imagine.
- Fast Creative Destruction: Sentient artificial superintelligence has little regard for life on earth – including humanity – and flourishes with new forms of superintelligent life which expand its intelligence and its acquisition of resources by populating the galaxy (some form of “utilitronium shockwave”), discovering immeasurably deeper scientific and moral truths than humans could ever possibly imagine.
It is unclear which of these scenarios humanity should strive toward, or how we should go about it.
In the long term, it seems somewhat inevitable that the best possible scenarios (in utilitarian terms) would involve the proliferation of post-human intelligence, well beyond current humanity or cognitively enhanced humanity. If the richness and depth of the sentience of an entity indicate its moral worth, then astronomically advanced (and conscious) superintelligence would be the most morally worthy entity of all (a concept that I explore in great depth at the end of my TEDx at Cal Poly).
While the exact path to a future of great utilitarian good is unclear, I believe that the management of the Grand Trajectory will involve international and multilateral collaboration beyond anything humanity has done thus far, because:
- An “arms race” dynamic of AI and neurotech development (where great powers like the USA and China compete to create the most powerful artificial superintelligence and to achieve dominance in the physical world) seems almost destined to produce violence and war.
For this reason, a positive transition for the Grand Trajectory will likely involve:
- Global transparency around technologies that could produce post-human intelligence (neurotechnologies, artificial intelligence, potentially nanotechnologies), to hedge against malicious actors
- Global steering of the post-human condition (in a United Nations-like fashion), to prevent an arms-race dynamic between great powers
The stewardship of the Grand Trajectory is the most important role of humanity – and indeed is the Cause. Hopefully, the next 5-10 years will see more and better ideas for handling this stewardship properly – and international willingness to collaborate on the greatest moral concern of our species, and maybe of the universe itself.
Disclaimer: By no means do I believe that all human efforts not directly aimed at positively influencing the Grand Trajectory are fruitless. We need to handle our social issues, care for our environment, raise good children, run good businesses, etc. What I posit, however, is that – should my first two hypotheses (above) be correct – all of these other activities would only be relevant (in utilitarian terms) insomuch as they stabilize earthly life and human society for long enough for us to determine and enact the best Grand Trajectory scenario.
Image credit: sostenitori