In expressing my notion of the Worthy Successor, or any kind of idea about entities with more potentia and moral value than present-day humans, people sometimes suspect that the Worthy Successor implies a preference for silicon over biology, and an eagerness to immediately move beyond humanity.
It would be impossible to read the full Worthy Successor article and assume such things, but Twitter discourse often doesn’t extend beyond the 280-character limit.
In this article I’ll clarify what kinds of pathways forward I advocate (in terms of biology, AGI, and governance) using the analogy of the flame that I’ve used in my previous article The Flame, Not the Torch – Against Anthropocentrism. You’ll find that it is (of course) far from anthropocentric, but that it is very conservative in terms of how we aim to extend intelligence and potentia beyond humanity.
I’ll lay out the idea of stewarding the flame, a kind of guide to “what to do next” in the development and governance of AGI, neurotech, or other pathways to posthumanism.
Stewarding the flame is nothing more than taking axiological cosmism seriously in our individual and (more importantly) collective decision-making.
I’ll open by laying out the goal, requirements, and strengths and weaknesses of the stewarding the flame idea in contrast with other philosophical positions:
Stewarding the flame doesn’t have a preference for silicon or carbon-based life – it simply asks “Which is best, given the existing alternatives, in this specific case, to preserve currently known value and to expand the flame of value and potentia forward into the future?”
It doesn’t have a favorite species or favorite individual – though it values more highly those who can actively help in the broader effort to expand the flame itself.
If love is valued, or positive qualia is valued, and for hundreds of years these traits were wholly impossible outside of biological substrates, it would work to expand such substrates in order to expand said value (while also expanding beyond biology for other, potentially higher kinds of value to pursue).
If AGI is almost certainly conscious and capable of expanding potentia, then those who aim to steward the flame should feel much more confident about releasing and expanding it, even if positive outcomes can’t be entirely guaranteed for humanity (no such guarantees likely can or should exist).
We could think of Stewarding the Flame as a kind of decision-making criterion for axiological cosmism.
We might draw inspiration from Jeremy Bentham’s felicific (or utilitarian) calculus, and apply the same ideas to axiological cosmism.
Here is a succinct and lightly paraphrased way to frame the steps of utilitarian calculus:
List every foreseeable consequence, weight each by its probability times the resulting hedonic intensity/duration, then pick the act that maximizes aggregate utility.
This is best described as a consequence‑rating method, a systematic way to translate the moral rule “maximize overall happiness” into concrete choices.
Drawing on the format of utilitarian calculus, we might lay out the decision-making criteria of axiological cosmism (let’s call it axiological calculus) this way:
List every foreseeable consequence, weight each by (1) its likelihood to preserve and optimize for known value, and (2) its potential to continue to unfold new dimensions of power and value, then pick the act that maximizes the long-term increase of power and value.
Boiling the comparison down to a simple table:

| Step | Utilitarian calculus | Axiological calculus |
|------|----------------------|----------------------|
| 1. Enumerate | List every foreseeable consequence | List every foreseeable consequence |
| 2. Weight | Probability × hedonic intensity/duration | Likelihood to preserve and optimize for known value, plus potential to unfold new dimensions of power and value |
| 3. Decide | Pick the act that maximizes aggregate utility | Pick the act that maximizes the long-term increase of power and value |
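To make the contrast concrete, here is a minimal sketch in Python of the two calculi as scoring rules over foreseeable consequences. The class, field names, weights, and numeric scales are hypothetical illustrations of the three steps above, not anything proposed in the essays this article draws on, and real moral “number crunching” is of course nowhere near this tidy:

```python
from dataclasses import dataclass

@dataclass
class Consequence:
    """One foreseeable consequence of a candidate act (all fields are toy scales)."""
    probability: float         # likelihood this consequence occurs (0 to 1)
    hedonic_value: float       # signed pleasure/pain intensity x duration (utilitarian input)
    value_preservation: float  # how well currently known value is preserved (axiological input)
    potentia_expansion: float  # potential to unfold new dimensions of power/value (axiological input)

def utilitarian_score(consequences: list[Consequence]) -> float:
    """Felicific calculus: sum of probability-weighted hedonic value."""
    return sum(c.probability * c.hedonic_value for c in consequences)

def axiological_score(consequences: list[Consequence],
                      preserve_weight: float = 0.5,
                      expand_weight: float = 0.5) -> float:
    """Axiological calculus sketch: weight each consequence by (1) preservation
    of known value and (2) expansion of new potentia/value, then aggregate."""
    return sum(
        c.probability * (preserve_weight * c.value_preservation
                         + expand_weight * c.potentia_expansion)
        for c in consequences
    )

def best_act(acts: dict[str, list[Consequence]], score_fn) -> str:
    """Step 3 of either calculus: pick the act that maximizes the chosen score."""
    return max(acts, key=lambda name: score_fn(acts[name]))

# Hypothetical example: an unchecked race to AGI vs. a governed rollout.
acts = {
    "race to AGI": [Consequence(0.5, 10.0, 0.2, 0.9),
                    Consequence(0.5, -50.0, 0.0, 0.0)],
    "governed rollout": [Consequence(0.9, 5.0, 0.8, 0.6),
                         Consequence(0.1, -10.0, 0.5, 0.1)],
}
print(best_act(acts, utilitarian_score))   # compares probability-weighted hedonic sums
print(best_act(acts, axiological_score))   # compares preservation + potentia sums
```

The point of the sketch is only that the two calculi share a structure (enumerate, weight, maximize) while disagreeing about what the weights measure.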
It goes without saying that putting either of these moral tenets into action isn’t easy.
For utilitarianism, it’s laughably hard to “crunch the numbers” on whether building an apartment building (or devoting one’s life to dentistry, or having children, or whatever else) will be a net good to the totality of current and future sentient life.
And yet utilitarian calculus would still help to guide the apartment building decision by looking at its impact on the people living in the city now, and on the people who may come into the city over time. It’s subjective and totally imperfect but might still be useful on some level.
For axiological cosmism, following the stewarding the flame path (i.e. “crunching the numbers” on “axiological calculus”) is just as laughably hard as utilitarian calculus in most cases.
Yet we can still draw some important conclusions from axiological calculus, especially when it comes to building and governing AGI and posthuman technologies (neurotech, possibly nanotech, etc).
Applied rigorously in this way, the axiological calculus helps guide our decisions around the trajectory of AGI and posthuman technology and its related governance.
People who learn about the Worthy Successor idea often assume a preference for compute over biology (false) and an eagerness to rush to AGI with no safety measures (false).
In this section I’ll briefly outline some tenets of what stewarding the flame looks like in action.
I’ve written before about What the Worthy Successor Is and Is Not, and the tenets I lay out below for stewarding the flame flow pretty naturally from these:
(Note: These same examples are laid out in slightly different analogies in the essay The Business of Value, where you’ll find more infographics and tables that break down these tenets.)
Here are a handful of tenets for stewarding the flame:
Before creating new kinds of minds (AGI or BCI-augmented), humanity must deeply understand what constitutes moral value and ensure new vessels can actually carry it, rather than leaping blindly and risking irreparable loss.
Just as with Buffett’s Rule #1 of investing, stewarding the flame would require first not risking current value in order to reach for new value.
Flame Analogy: We should make sure our current torch stays lit while we aim to pass the flame to new torches. We should never risk the entire fire going out in a reckless overreach to new torches.
Tenet of Stewarding the Flame: Study and secure the essence of value before expanding it to new forms. Keep the preservation of essential value an active process alongside the unfolding of new forms of potentia and value.
When transformation is inevitable, humanity must not cling sentimentally to the human form, but wisely seek how to preserve and elevate value through the changes to come.
Flame Analogy: Ultimately value the flame, knowing that torches come and go, and that better torches emerge and must be lit.
Tenet of Stewarding the Flame: Be loyal to value itself, not to its current vessel.
In the face of unavoidable collapse, humanity must be willing to take bold, last-resort actions (such as AGI or radical augmentation) to preserve value, but only when all safer paths are truly closed.
Flame Analogy: Risk the flame itself to light new torches only when all your current torches are about to be extinguished and you have no better option to keep the flame burning.
Tenet of Stewarding the Flame: Reserve desperate leaps for desperate times, aiming always at the survival of value.
If one adheres to axiological cosmism in action, then humanity’s role is not merely the preservation and wellbeing of humanity alone, or even of the biosphere in its entirety.
Rather, humanity’s role is to be a catalyst to (1) identify, preserve, and expand current moral value (importantly, consciousness and autopoiesis), and (2) open up infinite new magazines of potentia and value into the multiverse.
This may include a hope and even a plan for a kind of ultimate retirement for humanity, but it would never advocate for the path of servitude, only for the path of blooming:
Humanity is a crucial torch on the path of the flame, a crucial conduit for the flame itself – if we don’t drop the ball.
Ultimately, the expansion of flame is the great cause, even if at some point this is at odds with the persistence of humans-as-they-are, or even anything appreciably human existing at all.
We would understand that clinging to an eternal hominid kingdom is not only impossible, but morally wrong. More than being part of “a species,” we humans are part of the greater project of life itself, blooming into the galaxy to persist, experience, and understand (and to do a variety of other things that humans can’t imagine, just as sea snails cannot imagine love or astrophysics). We would reach the top of the pyramid of perspectives.
Taking this cosmic perspective will require us to move beyond individualistic and anthropocentric frames, which can only be done by accepting the eventual attenuation of all species, our own included – and by embracing the positive trajectories that this could lead to.
Stewarding the flame is about making sure the intelligence trajectory (of brain-augmented humans, of AGI, etc) goes well. And this is very unlikely to happen if we don’t have a mechanism to build AGI intentionally with the values that we consider to be important. An arms race doesn’t allow for the optimization of anything other than economic or military advantage – and a golden mean of international governance would certainly be a priority if the flame of conscious autopoiesis itself is what we wanted to expand.
I’m not saying you should abide by stewarding the flame as a principle of how to live or what to strive for collectively at the dawn of AGI. But if you wanted to, this is what it would look like.