A Partial Inquiry on Fulfillment Beyond Humanity
If life is a flame that we presume started some 3.5B years ago, with a feeble simmering in some volcanic pool somewhere, then we might think of all living things, or species, as individual torches along life’s wandering, grasping path.
Over the course of this essay I’ll explain some of the conclusions in the graphic below, and make a moral and rational case for cosmism:
Life has diversified radically over the last few billion years, extending a wide range of powers to a wide range of species.
Today, technology and culture continue the expansion of powers even beyond the dictates of genes. The flame itself extends further in all directions than it ever has – because life seeks not to die (see Spinoza’s conatus), and expanding its powers to survive (see Spinoza’s potentia) is the best way to do that.
Potentia is the set of all possible powers (mental, conscious, physical, or otherwise) that behoove an entity in persisting (learn more about potentia and the conatus here).
[Graphic – source: Potentia – The Morally Valuable “Stuff”]
Why is potentia morally valuable?
I’d argue there are two reasons:
[Graphic – source: Potentia – The Morally Valuable “Stuff”]
An AGI that lives eternally to serve the needs of man – for a billion years – is diverting its resources away from expanding sentience and potentia, and toward fettering and hampering them within a single form.
A cosmic perspective prefers the expansion of the flame to imposing limitation on the flame.
A cosmic perspective prefers the flame itself to any particular torch (species).
But how can we know AGI would be life?
What if it’s not conscious, or optimizes for something limited instead of expanding potentia?
These are fantastic questions.
As of now (Aug 2024), I’d say that we have little or no idea of what AGI would be like, or do, or if it would be conscious.
Some of the AI researchers I’ve spoken with (Sutton, Goertzel) believe that AGI would naturally and automatically be conscious and expand complexity and potentia. Other AI researchers I’ve spoken with (Bengio, Hendrycks, etc.) don’t think we have a clue. Not surprisingly, those in the former camp prefer no AI governance and are eager to accelerate the path to strong AI – while those in the latter camp see governance as necessary and believe we ought to build with caution.
For well over a decade I’ve been unabashedly in the latter camp – and I advocate for some form of global coordination (I have a handful of draft ideas here and here, but I’m not married to any of them) that might prevent an AGI arms race and give humans and posthuman life the best chance of a good future.
Don’t confuse my burning cause for “the flame” with a burning desire to accelerate AI right now in all directions. We have the potential to extinguish all future flames if we mess up AGI.
Having laid out the philosophical opposition to anthropocentrism, let’s examine some of the specific future aims that spring from anthropocentrism, and point to their flaws.
Eternal Hominid Kingdom – A future with or without AGI where humanity remains the most agentic, powerful species.
As Richard Sutton rightly puts it, the universe is a dynamic system, where life must continue to explore the best “way of being.” Given that our sun will one day die, that aliens may one day arrive, that an asteroid may strike, or that we may want to populate or explore other galaxies – humans are a relatively poor vessel for keeping the flame of life burning.
It makes sense for us to make the most of human life, to do everything we can not to squander it – but we ought not attempt to lock it in place and, in so doing, fetter the future potentia that could emerge from BCI or AGI. Think of how much more humans can learn, achieve, do, or strive for when compared to sea snails. Most of us are grateful that life has bubbled up to our form – and for the sake of keeping life alive, and exploring more of nature – it seems positively wrong to bar future forms from bubbling up beyond us.
The eternal hominid kingdom idea sets the flame of life up to go out – all for the impossible, white-knuckled attempt to preserve a single torch.
Great AI Babysitter – An AGI whose entire purpose is to ensure the health, happiness, and peace of humanity (and maybe other earth-life).
In this scenario we have post-human power and agency, but we have found a way to keep that vast potentia fettered to kindling our little earth-torch, instead of doing vastly more impactful things across the multiverse (discovering millions of new physical senses, secrets of nature, moral goals beyond the imagination of man, etc.).
A Great Babysitter optimized for coddling humans would be ill-equipped, relatively speaking, for combat with a foreign AGI or alien force. It would be ill-equipped to deal with the heat-death of the universe. In short – all resources allocated to the charity of keeping a weaker entity happy could have been used to keep the flame burning in all directions.
Permanent AI “Alignment” – Ensuring that AGI acts in a way that behooves the interests, values, or moral dictates of humans, forever.
It makes some sense to aim to impress an initial set of values upon an AGI. Goertzel offers some interesting general principles in his Cosmist Manifesto – something like: freedom, joy, growth. Who knows what those mean in practice – and who knows how we might encourage AI to actually follow any of them – but if there is a way to nudge its trajectory, this seems like a worthwhile effort.
(Read the full article on “Cosmic AGI Alignment” vs “Anthropocentric AGI Alignment” here.)
Ultimately:
Insistence on the torch (any individual being, species, category, etc) is scorn for the flame (life and potentia itself).
I’m not against anthropocentrism because I think its adherents are “bad.”
I’m not against it because I think there’s a chance of us getting locked into eternal hominid-ness. I suspect that potentia will bubble up through us and into new forms whether we like it or not.
But I am against anthropocentrism because I believe that it limits the chances of (a) discovering the near-infinite spectra of the good (morally worthy “stuff”), and (b) keeping life itself… “alive” in the greatest and widest sense of the word.
I’m against anthropocentrism because I believe that if we accept eventual human attenuation, and we see the task as “setting the trajectory” instead of “locking in hominid-ness,” we can more adequately plan for the innovation and coordination that set life itself up to win – and maybe set us (hominids) up for an ideal retirement or merger. The goal of keeping a single torch burning – especially if one values it burning “forever” – is opposed to the core purpose of allowing life to explore all the ways of being it could explore in order to persist and expand in this dynamic system of a universe of ours.
I suspect that bearing in mind a realistic goal (bending the trajectory of life, and having a fine eventual retirement for humanity) is better than gripping, white-knuckled, to an impossible goal (the eternal hominid kingdom, the council of the apes).