A Partial Inquiry on Fulfillment Beyond Humanity
The environment matters only insofar as it is required to support sentient life.
Granite, molten lava, clouds, and puddles all seem unlikely to be aware of themselves. If a granite planet the size of a billion Earths were destroyed tomorrow in some faraway and totally lifeless galaxy, it would be as if it never happened; no self-aware “things” would have experienced the event (reference: “Consciousness counts”).
Some forms of life, like fungi and plants, seem unlikely to have any sentient awareness at all. They may respond to their environment, but it seems unlikely that they have an “internal movie” of real experience (qualia), or any kind of pain-pleasure axis.
I concede that this assumption may be wrong, but for all intents and purposes, it seems pragmatically right. Even the most caring Jains – who would not hurt an earthworm – will still use paper (i.e., pulverized trees), and will still eat vegetables and fruit.
Anyone who would save a truckload of potted daisies from a crash rather than a truckload of cattle (or of school children) is almost certainly insane. Intuitively and practically, we understand that that which can experience life is valuable, and we value creatures by the richness and depth of their sentience.
For this reason, Earth could be argued to have no moral value outside of its ability to sustain current forms of sentient life – and give rise to future forms of sentient life.
But would the environment (rivers, forests, bullfrogs, mushrooms, whatever other kinds of organic-life-conducive things we want to list out) matter after the singularity?
While there might be a near-infinite number of arguments for why the health and vibrancy of the environment (rivers, oceans, plants, etc.) are valuable, I’ll outline four common arguments, and their potential counter-arguments, as they apply to the long-term future of intelligence.
Bear in mind that I’m not stating that these arguments are “right”, nor am I saying that all of their counter-arguments are “right”; I’m merely presenting potential positions that might be taken here.
Argument:
“It is all there is, no life can be ‘invented,’ it can only be conjured and evolved slowly from the water, oxygen, and sunlight on a rare planet like ours. There is no post-human intelligence, ever. In a billion years, this blue planet will still be the richest beacon of sentient creatures in the known galaxy, and we must keep this living ecosystem alive, more-or-less as it is.”
Counter-arguments:
Argument:
“If any post-human new life springs forth, it will evolve from Earth. The creation of new and higher intelligences is far too challenging for humans, and the complexities of Earth’s environment will be what conjures what’s next – through biology alone, and the same gradual evolution that took us from rodents to humans. Even if this future post-human life can populate the galaxy and make Earth less relevant and special, our present ecosystem must be sustained for billions of years for such a thing to happen.”
Counter-arguments:
Argument:
“No matter what kind of post-human or post-carbon life comes about in the next billion years, Earth is sacred for its own sake and should be cared for. Even if our current environment could be turned into blissful super-sentient computronium (unimaginably more valuable – in utilitarian terms – than all life on Earth), it should not be – it should remain a mass of evolving bugs and lizards and fungi, fighting among themselves and gradually mutating into new forms.
The sanctity of Earth exists regardless of its maximization of wellbeing for rich sentient entities – it is valuable for some other reason, like “diversity” or “inherent sacredness.” There is also a value in the diversity of life, and a potential stability from this diversity that post-human superintelligence may never be able to match.”
Counter-arguments:
Argument:
“If blissful, super-intelligent, super-sentient life can be created, then such life should (for the sake of maximizing utility) expand throughout the galaxy and populate as much of the universe as possible. The lizards and plants and fungi of today shouldn’t be treated with unnecessary cruelty, but sooner rather than later the ecosystem should simply be converted into infinitely more intelligent and blissful ‘stuff’, so long as this ‘stuff’ can be sustained.”
Counter-arguments:
Here’s where I stand on the value of humanity and the environment in a hypothetical future of post-human intelligence:
If the goal is to increase the net tonnage of positive conscious experience – and decrease the net tonnage of negative experience – then a strong AI utility monster may eventually do a much better job of that than all biological life, and biological life may be better off bowing out. Not necessarily being violently removed, but bowing out. If we frankly value the utilitarian definition of “the good”, then this should not be beyond consideration.
If the goal is to understand the universe, and nature, and all of its complexity, then we may be better off doing that through an AGI that is populating the galaxy than by staying on Earth. A superintelligence millions of times smarter than humans would only have so much to learn from Earth life before moving on – and if discovering “what this is all about” is the point, then our current Earth and environment may be better off giving way to the whims of this greater intelligence, rather than housing more new species of fungi or rabbit or seaweed. If we frankly value ultimate truth and discovery, then this, too, should not be beyond consideration.
That said – we have no idea when, or if, post-human intelligence will arrive, and it is undeniably useful to be excellent stewards of the environment today – from a global warming perspective, a water shortage perspective, a pollution perspective, and more.
Long story short:
The environment matters now because it is the conduit to all known life, and to all good things that we biological intelligences can conceive of. If a superintelligence could hypothetically achieve our highest values (“diversity”, “truth”, “maximizing happiness”, etc.) better without the current environment than with it, then it might be better to phase out the current environment and permit the Earth to become whatever maximizes the moral “good” – as it is interpreted by that superintelligence.
I articulate this exact problem in my TEDx talk at Cal Poly – with an idea I like to call the “blue orb.” Check out the full talk below; things get pretty intense around 14:27:
Does this mean that I’m excited about – or encouraging – human or Earth-life extinction?
No.
I simply consider extinction and/or evolution inevitable, but I’m not interested in speeding up that process for its own sake.
My own clarion call is that we should talk about the distant future as a place where a probable hand-off occurs between humanity and whatever is beyond humanity, after humanity – and that this framing will set us up to potentially do more utilitarian “good” (and prevent unnecessary conflicts) than presuming that the year 4,000 AD or 4,000,000,000 AD will be populated and run by temporary hominid forms like ourselves.
Precious as we humans are, and grateful as I am to have been born a human (instead of, say, a sea snail or a groundhog), these are my beliefs.
I don’t have any firm directions that I believe humans should take, but I do have conversation topics that I believe deserve discourse as we flesh out the future:
Speaking about the far future – about the “North Star” goals of our species – should involve hard conversations about the topics above. If we are to navigate the future as a species, we’ll have to get on the same page about where we’re going, and what our shared aims are. Purely human-centric aims into the distant future are probably not viable.
How and when will we hand off the baton?
To what will we hand the baton?
And what will we do with ourselves afterwards?
Our near-term issues are critical to solve (climate change, nuclear non-proliferation, improving human wellbeing, lifespans, global water supplies, etc.), but we are tasked with more.
Ultimately, we are tasked with determining what this is all about, why it’s worth continuing, and what we’re headed towards. Humanity should devote most of its attention to the issues of the decade ahead – but consideration should be given to what’s beyond that time horizon, and what it might mean.
The words of Lucretius don’t end with humanity:
Thus the sum of things is ever being reviewed, and mortals dependent one upon another. Some nations increase, others diminish, and in a short space the generations of living creatures are changed and like runners pass on the torch of life.
What do we become? And how do we make this transition?
These are the two questions.
Header image credit: Wikipedia