The Short Argument for Cosmism

Much of the current AGI dialogue takes place under the umbrella of an unquestioned anthropocentric perspective. We often take for granted that:

  • “Aligning” AGI implies honing its possible goals down, permanently, to human goals and values.
  • A “good outcome” for AGI means “good” from the vantage point of humanity (and not of posthuman intelligences themselves).

The purpose of this very brief article is to make the case that – by our own moral intuitions – cosmism is a better moral position to hold than anthropocentrism. In other words, I’ll argue that we can and should both (a) respect and work for the betterment of the lives of hominids, and (b) consciously work towards greater and higher expansions of life itself.

Anthropocentric vs Cosmic Life Perspective

Here’s a set of suppositions that you’ll probably agree with:

  • 1a. It would be devastating if all human life perished.
  • 1b. It would be more devastating if all life (including human) perished.
  • 1c. It would be even more devastating if all earth-life perished so completely that life on earth could never emerge again.

Another few suppositions that you’ll probably agree with if you read them in order:

  • 2a. It would have been tragic if single-celled life had never bloomed into higher intelligence and capability (i.e. “potentia”), up through animals.
  • 2b. It would also have been tragic if early mammals had never evolved brains large enough to arrive at humans.
  • 2c. It would be even more tragic if that “blooming” of potentia stopped at the merely human level, and never advanced further.

If you agree with the statements above, then you already understand on some level that the flame (life itself) is valuable, and that its rich expansion should be seen as a higher aim than the attempt at eternal preservation of any one particular torch (species).

One can believe both (a) that we should be very careful how we augment human minds and build new AGI minds, because not all of them will be sustainable and expansive ways of carrying on the flame of life (a la Bostrom’s Disneyland with no children), and (b) that it is impossible to deny that life’s survival requires its ability to take on new forms and new powers / potentia, and that there are potential posthuman forms of life very much deserving of moral consideration, and very much better suited to keep life’s flame burning into the multiverse.

It follows that:

  • Humanity should do its best not to perish, and especially not to let all known life come to an end (through nuclear war or an unworthy successor AGI).
  • Humanity should do its best to ensure the survival and flourishing of humanity – including technological enhancements and mind uploading.
  • Humanity should encourage the continued blooming of potentia, mostly through artificial general intelligence, but possibly also through biotech and neurotech.
  • Humanity must actively balance technological development: avoiding destruction (possibly through global coordination to prevent a reckless AGI arms race) while still moving towards greater potentia.

FAQ:

This article was intentionally brief – the bulk of my ideas have been laid out in much longer essays. I’ll bring up many of the common questions or objections I hear in my conversations, both online and offline, and address them here. Ping me on X if you want clarity on any of this.

  • “Do you think we should rush to augment human minds and build post-human intelligences?”

No, I very much do not. I believe it behooves us to discern whether what we are building will in fact be a net boon to potentia, and not a purely destructive force.

From the United Nations to the OECD to INTERPOL and elsewhere, I have advocated for some level of intergovernmental coordination in order to prevent a reckless arms race in these technologies. I suspect that rushing into brain augmentation would yield tremendous conflict, and that most AGIs are unlikely to be friendly to humanity (or even to be very good stewards of expanding potentia beyond man).

  • “Do you think it’s okay that humanity just ‘dies off’ to make way for AI?”

For well over a decade I’ve been thinking about best-case long-term scenarios for individual instantiations of human consciousness (read: Epitome of Freedom, As Much as We Can Hope For). Finding these pockets of paradise in a posthuman future will probably not be easy, and this is one of many reasons why I believe we should proceed with caution on AGI and brain-computer interfaces.

At the same time, I believe that humanity should look squarely at the fact that eventually we will likely fade away, and that the “steering of the future” (if volition exists at all) will be handled by vastly more agentic and capable entities. I believe acceptance of our eventual attenuation is a net good and will permit us to consider better human and posthuman futures.

Header image credit: ScienceFocus