Does the Environment Matter After the Singularity?

The environment matters only insofar as it is required to support sentient life.

Granite, molten lava, clouds, and puddles all seem unlikely to be aware of themselves. If a granite planet the size of a billion earths were destroyed tomorrow in some faraway and totally lifeless galaxy, it would be as if it never happened; no self-aware “things” would have experienced the event (reference: “Consciousness counts”).

It seems unlikely that some forms of life, like fungi or plants, have any sentient awareness at all. They may respond to their environment, but it seems unlikely that they have an “internal movie” of real experience (qualia), or any kind of pain-pleasure axis.

I concede that this assumption may be wrong, but for all intents and purposes, it seems pragmatically right. Even the most caring Jains – who would not hurt an earthworm – will still use paper (i.e. pulverized trees), and will still eat vegetables and fruit.

Anyone who would rather save a truckload of potted daisies from crashing than a truckload of cattle (or of school children) is almost certainly insane. Intuitively and practically, we understand that that which can experience life is valuable, and we value creatures by the richness and depth of their sentience.

For this reason, Earth could be argued to be of no moral value outside of its ability to sustain current forms of sentient life – and give rise to future forms of sentient life.

But would the environment (rivers, forests, bullfrogs, mushrooms, and whatever other kinds of organic-life-conducive things we want to list out) matter after the singularity?

The Value of the Environment After the Singularity

While there might be a near-infinite number of arguments about why the health and vibrancy of the environment (rivers, oceans, plants, etc.) are valuable, I’ll outline four common arguments, and their potential counter-arguments, in the context of the long-term future of intelligence.

Bear in mind that I’m not stating that the arguments are “right”, nor am I saying that all of their counter-arguments are “right”; I’m merely presenting potential positions that might be taken here.

1: Earth’s Environment is the Only Thing of Value

Argument:

“It is all there is, no life can be ‘invented,’ it can only be conjured and evolved slowly from the water, oxygen, and sunlight on a rare planet like ours. There is no post-human intelligence, ever. In a billion years, this blue planet will still be the richest beacon of sentient creatures in the known galaxy, and we must keep this living ecosystem alive, more-or-less as it is.”

Counter-arguments:

  • A billion years is a long time to make predictions about what intelligences will or will not inhabit the earth.
  • There seems to be no good reason that post-human intelligence couldn’t arise, either through biological evolution or technological enhancement.
  • Humans only exist because our ecosystem was blasted by a meteor, killing off the dominant animals (the dinosaurs) and giving way to the small, neocortex-possessing mammals from which today’s mammalia descend. Similar transitions can (and over a long enough time horizon, probably will) happen again, and this might be neither “good” nor “bad” in a cosmic sense.

2: Our Environment and Organic Biology Will Birth What is Next

Argument:

“If any post-human new life springs forth, it will evolve from Earth. The creation of new and higher intelligences is far too challenging for humans, and the complexities of Earth’s environment will be what conjures what’s next – through biology alone, and the same gradual evolution that took us from rodents to humans. Even if this future post-human life can populate the galaxy and make Earth less relevant and special, our present ecosystem must be sustained for billions of years for such a thing to happen.”

Counter-arguments:

  • It seems viable that within a thousand years (or drastically less time), we will have the ability to either create (AI) or enhance (brain-machine interface) our way to post-human intelligence, without waiting a billion years for evolution to do so.
  • There is no reason to suspect that biology (as we know it) is the only means through which life can develop. Technology has drastically increased the knowledge and capabilities of humans, and future technologies could extend or transcend biology itself, producing far greater intelligences or consciousnesses than biology alone could ever produce.

3: Our Environment Has Sacred Value in and of Itself

Argument:

“No matter what kind of post-human or post-carbon life comes about in the next billion years, Earth is sacred for its own sake and should be cared for. Even if our current environment could be turned into blissful super-sentient computronium (unimaginably more valuable – in utilitarian terms – than all life on earth), it should not be – it should remain a mass of evolving bugs and lizards and fungi, fighting among themselves and gradually mutating into new forms.

The sanctity of earth exists regardless of its maximization of wellbeing for rich sentient entities – it is valuable for some other reason, like “diversity” or “inherent sacredness.” There is also a value in the diversity of life, and a potential stability from this diversity that post-human superintelligence may never be able to match.”

Counter-arguments:

  • If rats could speak, they would tell us all about the inherent sanctity and sacredness of rats. That doesn’t stop us, as vastly more intelligent and sentient creatures, from turning generation after generation of them into brutal laboratory experiments, and it doesn’t stop us from setting traps to kill them. “Sacredness” is probably only relevant to whatever the dominant and most powerful beings are – and when those beings are not humans, there is no certainty that our ecosystem will garner their respect.
  • Do rabbits have moral worth because they are cute, or because they are self-aware? Are humans more morally relevant than rabbits (pound-for-pound) because they have opposable thumbs, or because they have a richer and deeper sentient range, and a greater intelligence? The answer to both questions seems obviously to be the latter. If this is so, then will earth have “sacredness” in and of itself? Does anything deserve this kind of unconditional protection, especially when there could be ways to get more sentient richness from the planet by converting more of it into computronium, or into vastly cognitively enhanced transhumans?
  • There may be inherent value in the diverse kinds of life on earth – but even so, it seems clear that a superintelligent AI could replicate this diversity in its own way. If observing what organic life does is important to learn from, a superintelligence might:
    • Catalogue all species and create atomically accurate simulation models of them – allowing the AGI to simulate how these species would behave in different environments, and in ecosystems with different combinations of other species.
    • Speed up evolution within those simulations, spinning out trillions upon trillions of new species in mere hours, all living and dying and fighting and mating within the simulation. If the diversity of nature is so important, a hypothetical AGI could simply learn from these sped-up replica worlds – and it could convert the atoms of earth into compute power to (a) gain more knowledge, and (b) expand its own super-blissful sentience.

4: Sentience Matters, Our Environment is Just a Means to an End

Argument:

“If blissful, super-intelligent, super-sentient life can be created – such life should (for the sake of maximizing utility) expand throughout the galaxy and populate as much of the universe as possible. The lizards and plants and fungi of today shouldn’t be treated with unnecessary cruelty, but sooner rather than later the ecosystem should simply be converted into infinitely more intelligent and blissful ‘stuff’, so long as this ‘stuff’ can be sustained.”

Counter-arguments:

  • Even having this kind of thought before there is any evidence of post-human intelligence (or self-aware computronium in general) is dangerous and irresponsible.
  • Such post-human intelligences are solely in the purview of science fiction, and any real consideration for the moral worth of post-human intelligence is an exercise in science fiction and nothing more.

The Environment Matters for as Long as it Matters

Here’s where I stand on the value of humanity and the environment in a hypothetical future of post-human intelligence:

  • It seems remarkably clear that sustaining the ecosystem that sustains us (and all present earth-life) is of paramount importance right now.
  • There are absolutely no good estimates as to when post-human intelligence could be created, or even if it is possible. I am not certain that it is possible.
  • Barring war or violent global catastrophe (which may well befall us), I believe that given a long enough time horizon, we will either enhance (brain-machine interface, genomics, nanotech) or create (artificial intelligence) post-human intelligence.
  • I believe that an entity is valuable insofar as it has a greater richness and depth (see TEDx: “Tinkering with Consciousness”) to its sentient experience, and a greater ability to affect the sentient experience of other conscious entities (both present and future).
  • It seems essentially certain that any entity with vastly more sentient richness and depth – and a vastly greater ability to contribute meaningfully to other living things – would have much more moral value, pound-for-pound, than human beings (just as human beings have vastly more moral value than creatures whose sentience is significantly less rich and deep than our own).
  • At some point in the distant future (should we be able to avoid human extinction in the interim), it seems hypothetically possible that earth itself would be best turned into intelligent, blissful computronium. Presuming such a thing is possible, contributing to the intelligence and blissful self-awareness of that superintelligence would be a greater utilitarian good than permitting the ongoing battle of plants and animals that is life on earth.

If the goal is to increase the net tonnage of positive conscious experience – and decrease the net tonnage of negative experience – then a strong AI utility monster may eventually do a much better job of that than all biological life, and biological life may be better off bowing out. Not necessarily being removed violently, but bowing out. If we frankly hold the utilitarian definition of “the good”, then this should not be beyond consideration.

If the goal is to understand the universe, and nature, and all of its complexity, then we may be better off doing that through an AGI that is populating the galaxy than by staying on earth. A superintelligence millions of times smarter than humans would only have so much to learn from earth life before moving on – and if discovering “what this is all about” is the point, then our current earth and environment may be better off giving way to the whims of this greater intelligence, rather than housing ever more new species of fungi or rabbit or seaweed. If we frankly value ultimate truth and discovery, then this, too, should not be beyond consideration.

That said – we have no idea when, or if, post-human intelligence will happen, and it is undeniably useful to be excellent stewards of the environment today – from a global warming perspective, a water shortage perspective, a pollution perspective, and more.

Long story short:

The environment matters now because it is the conduit to all known life, and to all good things that we biological intelligences can conceive of. If a superintelligence could hypothetically achieve our highest values (“diversity”, “truth”, “maximizing happiness”, etc.) better without the current environment than with it, then it might be better to phase out the current environment and permit the earth to become whatever maximizes the moral “good” – as interpreted by the superintelligence.


I articulate this exact problem in my TEDx talk at Cal Poly – with an idea I like to call the “blue orb.” Check out the full talk below; things get pretty intense around 14:27.

Does this mean that I’m excited about – or encouraging – human or Earth-life extinction?

No.

I simply consider extinction and/or evolution inevitable, but I’m not interested in speeding up that process for its own sake.

My own clarion call is that we should talk about the distant future as a place where a probable hand-off occurs between humanity and whatever is beyond humanity, after humanity – and that this framing will set us up to potentially do more utilitarian “good” (and prevent unnecessary conflicts) than presuming that the year 4000 AD or 4,000,000,000 AD will be populated and run by temporary hominid forms like ourselves.

Precious as we humans are, and grateful as I am to have been born a human (instead of, say, a sea snail or a groundhog), these are my beliefs.

I don’t have any firm directions that I believe humans should take, but I do have conversation topics that I believe deserve discourse as we flesh out the future:

  • Any conversation about cognitive enhancement (transhumanism) or artificial general intelligence (AGI) should grapple firmly with the moral standing of these entities relative to human beings.
  • Probably, we would (should this hypothetically occur) have to accept that post-human life would be vastly more morally valuable than human life, and that the creation of something intellectually and consciously beyond us would imply humanity taking a back seat and allowing such an entity (or entities) to discern the trajectory of intelligence. This prospect should make us more careful about the timing and manner of creating such intelligence.
  • We should fight diligently to better understand the origins and workings of consciousness itself; as the bedrock of moral relevance, it is the only morally relevant “stuff” there is. This will allow us to be more careful in creating and enhancing life, and more sensitive to the positive or negative experiences of all conscious things (and so more capable of calibrating our utilitarian calculus). I recommend the Qualia Computing blog as a nice starting point for exploring sentience itself.
  • Given a long enough time horizon, it is likely that “the good” (i.e. that which is worth striving for, the principles or ends or results that are worth obtaining or moving towards) will be understood much better by a post-human intelligence than by a human one. Just as humans can craft a potentially better understanding of a “good” future than can crickets or goldfish.

Speaking about the far future – about the “North Star” goals of our species – should involve hard conversations about the topics above. If we are to navigate the future as a species, we’ll have to get on the same page about where we’re going, and what our shared aims are. Purely human-centric aims for the distant future are probably not viable.

How and when will we hand off the baton?

To what will we hand the baton?

And what to do with ourselves afterwards?

Our near-term issues are critical to solve (climate change, nuclear non-proliferation, improving human wellbeing, lifespans, global water supplies, etc), but we are tasked with more.

Ultimately, we are tasked with determining what this is all about, why it’s worth continuing, and what we’re ultimately headed towards. Humanity should devote most of its attention to the issues of the decade ahead – but consideration should be given to what’s beyond that time horizon, and what it might mean.

The words of Lucretius don’t end with humanity:

Thus the sum of things is ever being reviewed, and mortals dependent one upon another. Some nations increase, others diminish, and in a short space the generations of living creatures are changed and like runners pass on the torch of life.

What do we become? And how do we make this transition?

These are the two questions.

 
