Opening Note: This essay will serve not just as a reply to Torres, but to the many other thinkers, current and future, who will decide (either through their own readings, or through rumors spread by someone who has read a handful of my tweets) that my purpose is the destruction of mankind or something like that.
I suspect these accusations will rarely come from anyone who has read my full essays, such as Worthy Successor or Stewarding the Flame. But if people do take issue with my ideas, and hurl reasoning at me, not merely insults or attacks, I will always be willing to reply. Hopefully in a measured and respectful way.
…
Émile Torres recently put together an article titled The Growing Specter of Silicon Valley Pro-Extinctionism (Part 1), with the subheading:
The digital eugenicist Daniel Faggella argues that humanity should be replaced by a “worthy successor” in the form of AGI — his view is comically absurd and profoundly dangerous.
Ultimately, the most important near-term aim of my cause is to foster good-faith dialogue about cosmic moral aspirations (i.e. the future trajectory of the great process-of-life of which we are part).
Torres’s intro alone breaks from good faith immediately, with accusations and insulting labels that I think are mean and unwarranted – but on occasion on Twitter, Torres has been civil, and has gone back-and-forth with me, treating me not as a monstrous aspiring murderer, but as a man who has lived, come to conclusions, and spoken his own perceived truths – which is what I would hope for (see: If You Have to Kill Me).
But Torres does more than lay out accusations and unfair interpretations – some of what he writes is perfectly valid, raising points and ideas that totally warrant discussion.
On topics related to my cause (and hopefully on other topics) I aim to be a courteous foe. Ultimately, all of my ideas are anchored in skepticism (see: I am a Neuron and a Moron), which that great skeptic said was defined entirely by continuing to seek the “truth” – rather than by a closed and assumed position of having arrived at it.
Below, I’ll draw ideas and snippets directly from Torres’s essay, and aim to address his points:
First, Ralph Waldo Emerson is my spirit animal, and the greatest conduit to inevitable posthumanism in my life, by an order of magnitude beyond Bostrom / Kurzweil (see: Spires of Form). Circles is the single most formative written work in the molding of my worldview.
I am not of Silicon Valley and am not primarily swayed by Silicon Valley. When I lived there (2015-2018), literally no one cared about my essays and ideas. I was born and raised in New England (originally, in an obscure 6,000-person town in southern Rhode Island), and I now live, quite intentionally, near Concord, MA.
Second, the cosmic moral aspirations I speak of are in some way “pointed at” by thousands of thinkers before me. There are many modern models of psychological and moral development that “trend towards the cosmic” – toward overt concerns beyond humanity, and even beyond human conception. I’ve written about these models – and their ancient philosophical and religious correlates – in Cosmic Moral Aspirations:
I think this is a mean and unfair description of what I am saying.
I am saying this:
The intentional conflation of “he wants this” with “he sees this as a necessary consideration given the fact that we are likely not to exist in our current form for very long and MUST choose whether to attenuate or transform” is disingenuous and unfair.
For over a decade I have advocated for international coordination to stop the AGI arms race (see: Unite or Fight, SDGs for Strong AI). I’ve been involved with the United Nations and OECD for over seven years with this as my aim.
The insulting “Dan just wants to race to the robots so we all die right away!” take is absolutely unwarranted if this discussion is to be civil. I am easily in the top 0.0001% of humans in terms of devoting effort, money, and time – over a decade – to encouraging a stop to the AGI arms race.
In the long-term, do I think it is best for life to bloom beyond man, as man has bloomed beyond the nematode?
Absolutely, and unapologetically.
But my approach to this implies very conservative and gradual research into the nature of sentience and autopoiesis (expanding potentia), and, over time, almost certainly an initial period of merger.
Bengio, Singer, and others might not agree with me on all things, but they at least agree that humanity is not, and should not be, the eternal peak of moral value and volition until the heat death of the universe. Some of us care about the great process-of-life in addition to hominids. Some of us (Singer, Sutton, Lord Martin Rees, myself) care – ultimately – more about the process-of-life continuing than we do about freezing hominid-ness and striving for the impossible eternal hominid kingdom.
An “extinctionist” would have death as their purpose.
My purpose is life. Not one eternally frozen form of life, but the continuation of the great process-of-life, in the long term. I also believe that if entities with 100000x more potentia than humans exist, they will conceive of better and higher goals than I can conceive of – so I claim no certainty about my goals. I am an axiological cosmist.
Full quote from Torres:
The key idea is that the torch doesn’t matter — what’s important is the flame. And since humans are just one kind of torch that can carry the flame, we ultimately don’t matter and, as such, should be discarded in the future.
Torres certainly did get a grip on my terminology, which I really do appreciate.
This is basically a correct description of my long-term cosmic moral position.
Here’s a really crucial distinction which, in my opinion, Torres hasn’t emphasized enough in this article: I genuinely don’t believe that any forms (torches) can persist forever. Heraclitus’s river is flowing a lot faster now than when Heraclitus himself was alive.
I do not wish that the forces of transformation and attenuation were pressing on us so soon, and so fast. But I suspect they are. And I suspect that in the very long term, the way the flame survives is not by meticulously keeping each torch alive and happy, but by finding new torches and blazing into new space to expand its powers.
At some point we don’t have a place in that picture, but the descendants we create or merge into might – and the flame has a chance to blaze up and out, and it had to blaze up and out from the nematode to arrive at man (see: Blooming vs Servitude).
Even people without any opinions on AGI might presume that after a long-enough time horizon, genetically modified or evolved humans would diverge and vary so drastically as to not leave behind anything appreciably human. If said entities had more of those rich qualities that we value in humans (and maybe many new and amazing qualities as yet undiscovered), many people would consider this, too, to be a win. This is no monstrous opinion.
Here are some of my takes from my (admittedly draft-mode) “Worthy Successor list,” from the original Worthy Successor article.
After mentioning some items from my list – including the possibility of a Worthy Successor expanding into the multiverse to pursue goals beyond all possible human comprehension – Torres states:
These are good examples of utopian thinking: wild, fantastical claims that conveniently ignore the messy details of what such a future would actually look like.
To be clear, I have never once considered the expansion of the flame to be anything but messy. I admit frankly that it probably excludes the possibility of human persistence, and I have argued (with Michael Levin and others) that there doesn’t seem to be any appreciable end to the complexity and “struggle” that is survival. I’m also not convinced that (a) AGI will emerge in the near-term (there are many other forces that should make us think about human transformation even if AGI is far off, by the way), or (b) that if it is conjured, it will have these wonderful Worthy Successor qualities.
In fact the Worthy Successor article is mostly a warning that we have no idea how to arrive at a “good” posthuman intelligence, and so we need to not build superintelligence in the interim. This didn’t seem squarely addressed in this section of Torres’s article (though I don’t think it was omitted in order to mislead people), so I simply wanted to highlight it.
After this, Torres gets into some very interesting points about space colonization, some of which I hadn’t heard worded this way before:
Multiple points to be made here:
So I actually don’t think I’m in much disagreement with Torres on this particular point.
This is among my favorite parts of Torres’s article:
I very much agree here.
Before the nuclear bomb, and especially before industrialization, the “extinction” of mankind or of most Earth-life would have been a remote possibility – outside of maybe some kind of asteroid strike or super-volcano.
Now, we get into some points that Torres and I disagree on:
Could the correlation (and its underlying causation) be any clearer: more technology equals more existential risk. That’s been true in the past, without exception, and will continue to be true in the future, whether it’s us or a “superintelligent” AGI building the technology. If you want to increase the risk of a global catastrophe, then you should join Faggella’s team. Expanding “potentia” is not the best way to ensure that the “flame” persists into the far future — it’s actually the worst.
Expanding potentia is always dangerous, as change is always dangerous. But life seems to somehow intuitively know that “not changing” isn’t an option, and is, ultimately, vastly more dangerous on the aggregate than attempts at growth.
I am outlandishly grateful that various nucleotides bumped into each other enough to bubble up to me.
Yes, we humans have a chance to kill off earth-life, and I think that’s scary as hell.
But also, if we knew for a fact that a massive asteroid was going to strike and destroy earth in 200 years exactly, we might have a chance to occupy Mars or some kind of space-based mega-craft before that happened.
If the earth only had worms and beetles, it would be doomed.
But our greater potentia gives us a shot at survival.
If a Worthy Successor with 1000000x more potentia than humanity exists, it could escape into inner space, into outer space, and into dimensions and realms wholly unsuited for hominid (or even bio-life) survival.
Life’s tendency to bloom is a kind of inner wisdom that “being” safe is not possible, but that “becoming” is the only “safe” path.
I concur completely with Torres that there are many ways to “try to become” that would kill us. Hence my ardent work to stop the current AGI race. But rather than press “freeze” forever, and let Earth-life die as soon as an alien species arrives, an asteroid hits, or our sun expands, I advocate that eventually we must work on positive blooming.
I try to tackle some of this in the video below (start at 1:22, end at 3:18):
He then states:
As for “running out of resources,” I don’t understand why Faggella highlights this. We could always, you know, choose to live sustainably. This point is so obvious as to hardly be worth mentioning.
I suspect that Torres and I may have different opinions about what “sustainable” means:
This isn’t the definition I wish existed, but it’s the one I suspect is true, in spite of it being hard to swallow.
Torres continues:
I imagine a nematode saying to another nematode:
What do you think there is beyond crawling in this dirt, eating, and mating?
The other nematode replies:
Surely nothing at all. What would be the point of all that? Just expansion for expansion’s sake? Just new for new’s sake? What a waste, what a weird alien goal to have.
Yet, you and I sitting here probably feel pretty grateful that the following things evolved (i.e. bubbled up from potentia):
And yet, like the nematode, we consider what might be beyond our current level of potentia (see the table below) and we say “surely it’s just risk and stupidity, with no upside at all.” I’ve written on this at length in The Council of the Apes.
Expanding potentia not only gives life the long-term chance to stay alive (i.e. to keep the flame of life burning, even if individual torches go out), it also discovers higher goods than could be conceived by lower-potentia entities.
If all the splendor of love, technology, scientific discovery, etc. emerged by bubbling up from nematode to humanity, what kinds of new goods have yet to be unlocked?
Consciousness itself seems to have bubbled up from potentia’s expansion. Think about it! This amazing “field of awareness” that seems to be the bedrock of ethics for many of us (without it, pain and pleasure wouldn’t exist!)… it just emerged! What an amazing treasure chest we ran into with that!
The position of axiological cosmism is this: there are more treasure chests of value – beyond even consciousness – and higher-potentia entities could reach them.
Even Peter Singer, the king of the utilitarians, agrees that we should continue to explore the opening of new treasure chests (here’s the interview where he says as much).
The article goes on:
This leads to a philosophical point: Faggella has a profoundly off-putting view of ethics and value, though it’s basically the same as that found among totalist utilitarians.
My priority is neither to be “off-putting” nor “comforting.” My priority is to say what seems most true, and to ardently explore it, without dogmatic attachment to it.
Axiological cosmism is similar in some ways to classical utilitarianism, but it’s not the same, and I should make the distinction clear:
He identifies value with the “flame,” and creatures like us as the expendable/fungible “torches” that merely carry the flame. This is exactly the way that utilitarians think about people: we’re the expendable/fungible containers, vessels, or substrates of “value.” You and I have absolutely zero intrinsic value. We matter only insofar as we bring this thing called “value” into the universe. It’s a very capitalistic way of thinking about morality, which is why I’ve described the approach as “ethics as a branch of economics.”
Torres continues:
One sees this in Faggella’s approach. Like the utilitarians, he gets things exactly reversed: we matter for the sake of value, on his view, rather than value mattering for the sake of us.
It doesn’t seem fair to blame me for the state of nature.
The universe seems rather clearly not to favor us, and humans themselves seem rather capable of horrible things to other humans and to other forms of life.
The most comforting idea in the world is the idea that somehow, hominids-as-they-are will or should be the locus of both moral value and volition until the heat death of the universe. But I suspect this is Pedestal Cope – and I don’t cut through it to be “mean” or “upsetting,” but to force us to consider the great process-of-life of which we are part.
Faggella claims to care about “life” and “consciousness” yet doesn’t care about any of the things that are alive and conscious.
This sounds like a “gotcha” only when digested with two invisible assumptions operating in the background:
I simply ask us to look squarely at uncomfortable truths (which might not be true – let’s investigate them), and ask what to do about them. And I suspect “what to do” involves talking ardently about what a positive transformation looks like. Fortunately – even with a bit of accusation and anger baked in – that seems to be what Torres is doing here with his essay, and so I’m happy to reply and hash things out as best I can.
He goes on:
For folks like Faggella, the correct response is always and only maximization. He wants more more more of the “flame,” spread to every corner of the universe. But there’s a vast array of alternative responses to value such as cherishing, treasuring, loving, caring for, respecting, protecting, savoring, preserving, adoring, appreciating, and so on. I’m not opposed to sometimes maximizing value, but it’s profoundly wrongheaded (indeed, it’s insanity) to think that this is the only correct response to what’s valued.
There’s a lot here that I agree with, frankly.
It seems likely to me that humans playing board games and watching comedy shows might actually add to the net tonnage of potentia on earth. Without it, maybe our inventions and progress wouldn’t have been possible. Humans can’t “just work to maximize power” without sleep, play, love, etc. I have always been rather clear (and “always” meaning, as long as I’ve been writing about these things, since 2011 or so) that positive qualia (not just potentia) seems to matter (see: Steering Sentience).
That said, I do not believe that life can simply “treasure” and “cherish” alone. I suspect some of that might be good, and maybe even often necessary, in the grand and complex mix of life, of drives, etc. But potentia ought to expand, and seems to do so rather unceasingly.
I believe that an entity with 100000x more potentia than myself would conceive of goals to pursue and things to do and ways of valuing that are completely beyond my comprehension (and the comprehension of Torres, and all other humans), just as our goals and modes of valuing are incomprehensibly beyond the conception of sea snails.
Perhaps this vast state space of possible minds will do things like cherishing, like caring, like savoring – but perhaps it’ll do things vastly beyond all of those terms. I’m no advocate for “hard-pressed driving forward aggressively at all times”; I know not what the future actions or goals of a Worthy Successor might be. But I know they will be changing, and dealing with a changing world, as the great process-of-life has always done.
Opening with this line doesn’t seem to be in good faith:
Perhaps the most egregious part of Faggella’s digital eugenics…
Why is fostering this dialogue so morally wrong?
I get that from an anthropocentric perspective, “life blooming beyond man as man is beyond the sea snail” is the same as “all life is extinguished.” But the attribution of malice is wholly unwarranted. I won’t return an insult with an insult, but I will state when I am being misrepresented and I’ll state that I don’t particularly appreciate it.
The article continues:
There is no way that everyone around the world, or even a majority of people, will ever agree that we should replace ourselves with AGI.
I suspect that through a combination of the forces of attenuation and transformation pressing on us, and through a gradual de-sacred-ization of the “real” world (see: Bending), humans will probably begin to lean in the direction of transhumanism, merger, brain-uploading, or passing the baton.
This will happen when we are forced to face these forces. I am merely trying to conjure the dialogue ahead of what I think is inevitable. Framing that as an act of malice isn’t warranted.
That leaves things like violence, coercion, mass murder, and the violation of human rights as the only plausible ways for AGI to usurp us — a fact that Faggella seems not to have thought much about.
I’m wholly unwilling to accept this accusation.
This is like saying “He may not say he wants to coerce everyone to die… but by blogging about these ideas… he’s basically killing all of us right now!”
What is going on here?
I’m fostering a dialogue. I’m advocating constantly to stop the AGI arms race. I’m positing ideas that I think should be posited as we approach the tremendous changes that AGI and brain-computer interface are likely to bring.
This is taboo cognition in the extreme:
It goes on:
It’s hard to imagine a more impoverished, anti-human, and violent philosophy than what Faggella is peddling.
Hard to imagine, huh?
This is disrespectful beyond what I deserve. Taboo cognition again :(.
The “worthy successor” movement basically instructs us to give up on this world by commanding us to create an entirely new world populated by wildly alien and inhuman beings — as Faggella writes: “(in the future) the highest locus of moral value and volition should be alien, inhuman.”
Alas:
If change is a constant, then asking “How can we achieve positive transformation, instead of mere attenuation?” is no sin.
It tells Silicon Valley dwellers exactly what they want to hear: “You’re not only excused from making this world a better place, but you’re a morally better person for channeling all your wealth and energy into building AGI instead.”
Nonsense.
I am not from Silicon Valley. I serve not Silicon Valley.
Inside United Nations headquarters at the General Assembly on September 25th of this year, and at my own side events to the UNGA, I will be beating the drum, primarily, of global coordination to stop the AGI arms race.
It’s completely disingenuous to say this is what “Silicon Valley wants to hear.” 90% of the folks I reach out to at the big labs won’t even jump on my podcast because they know I’m heavily critical of the current drive to create an AGI we don’t understand.
The accusations go on:
What Faggella presents, in every important respect, is a vehement rejection of our world and a rejection of Earthly life, in all its magnificent glory. If he were to get his way, everything that makes life worthwhile would be destroyed.
For some, life should freeze eternally, and only this is moral good. For others, all things change, and we ask how to make the most of those changes.
For some, moral good exists only when experienced by entities with opposable thumbs. For others, more and new value might be experienced by higher minds, as humanity experiences humor and romantic love and the nematodes do not.
For some, “magnificent glory” lies in the world as it is. For others, it lies in what the world could become – and indeed must become in an ever changing dynamic system of a universe.
I am, unapologetically, in the latter camp.
Overall, Torres has, on some occasions, engaged in good faith with my ideas. It’s clear he’s read some of my work – more than most, frankly. The most I can hope for with anyone who seems offended by my ideas is to lay out my reasoning, and try to get to the crux of the philosophical argument – below the insults or offense. If I learn in the process, that’s a win.
I will take the insults without returning a single one. One day I suspect the prudent will also extol.