Much of the current AGI dialogue takes place under the umbrella of an unquestioned anthropocentric perspective. We often take for granted that:
“Aligning” AGI implies honing its possible goals down, eternally, to human goals and values.
A “good outcome” for AGI means “good” from the vantage point of humanity (and not from that of posthuman intelligences themselves).
The purpose of this very brief article is to make the case that, by our own moral intuitions, cosmism is a better moral position to hold than anthropocentrism. In other words, I’ll argue that we can and should both (a) respect and work for the betterment of the lives of hominids, and (b) consciously work towards greater and higher expansions of life itself.
Here’s a set of suppositions that you’ll probably agree with:
1a. It would be devastating if all human life perished
1b. It would be more devastating if all life (including human) perished
1c. It would be even more devastating if all earth-life perished so completely that life on earth could never emerge again
Here are another few suppositions that you’ll probably agree with if you read them in order:
2a. It would have been tragic if single-celled life had never bloomed into higher intelligence and capability (i.e. “potentia”), up through the animals
2b. It would also have been tragic if early mammals had never evolved brains large enough to give rise to humans
2c. It would be even more tragic if that “blooming” of potentia stopped at the merely human level, and never advanced further
If you agree with the statements above, then you already understand on some level that the flame (life itself) is valuable, and that its rich expansion should be seen as a higher aim than the attempt at eternal preservation of any one particular torch (species).
One can believe both (a) that we should be very careful how we augment human minds and build new AGI minds, because not all of them will be sustainable and expansive ways of carrying on the flame of life (a la Bostrom’s Disneyland with no children), and (b) that life’s survival requires its ability to take on new forms and new powers (potentia), and that there are potential posthuman forms of life very much deserving of moral consideration, and much better suited to keep life’s flame burning into the multiverse.
It follows that:
Humanity should do its best not to perish, and especially not to let all known life come to an end (through nuclear war, or an unworthy successor AGI).
Humanity should do its best to ensure the survival and flourishing of humanity – including technological enhancements and mind uploading.
Humanity should encourage the continued blooming of potentia, mostly through artificial general intelligence, but possibly through biotech and neurotech.
This article was intentionally brief – the bulk of my ideas have been laid out in much longer essays. Below, I bring up many of the common questions and objections I hear in my conversations, both online and offline, and address them here. Ping me on X if you want clarity on any of this.
“Do you think we should rush to augment human minds and build post-human intelligences?”
No, I very much do not. I believe it behooves us to discern whether what we are building will in fact be a net boon to potentia, and not a purely destructive force.
“Do you think it’s okay that humanity just “dies off” to make way for AI?”
For well over a decade I’ve been thinking about best-case long-term scenarios for individual instantiations of human consciousness (read: Epitome of Freedom, As Much as We Can Hope For). Finding these pockets of paradise in a posthuman future will probably not be easy, and this is one of many reasons why I believe we should proceed with caution on AGI and brain-computer interfaces.
At the same time, I believe that humanity should look squarely at the fact that eventually we will likely fade away, and that the “steering of the future” (if volition exists at all) will be handled by vastly more agentic and capable entities. I believe acceptance of our eventual attenuation is a net good and will permit us to consider better human and posthuman futures.