For the sake of this essay I won’t be talking about morality as some abstract human access to “the good,” but simply as a set of heuristics and principles for making decisions, for valuing what to do or what not to do.
Let’s start by defining our little made-up term:
Moral Singularity: A point in time where the values of transhuman or strong AI systems evolve so quickly and radically that it is impossible to predict what the most powerful systems will value – making it impossible to ensure the continued adherence to any previous goal.
It is my belief that a moral singularity will not result in a single, core set of values and principles being somehow “found” in the universe.
Rather, given what I believe to be the inherent arbitrariness and contextual nature of morality (not that it should be ignored, but that it should be understood to be malleable and open to reconsideration), it seems obvious that transhumanism and AGI will birth a kind of splintered, ever-increasing moral complexity which will be both impossible to predict and nearly impossible for humanity to survive.
My argument can be summarized as follows:
As more advanced and varied mental hardware and software come about, entirely new vistas of ideas and experiences become available to creatures possessing those new mental resources. I will posit that these ideas are often entirely inaccessible to their predecessors.
Very few people would disagree with the two statements above. It follows, then, that:
As AI augments itself, learns, grows, develops – it will not arrive at a “singularity” in the form of a single set of inviolable moral ideas – but rather – it will explore the possibility-space of subjective moral ideas with such speed and voracity that it may change wildly minute-by-minute, and somewhere in those moral oscillations, humanity is near-destined to be deemed of little importance and to be destroyed, absorbed, or ignored.
‘Lol, AGI won’t just maximize paperclips, it’ll question its goals. Humans will be fine!’
— Daniel ‘No, Brother’ Faggella (@danfaggella) November 24, 2023
No, brother.
What makes you think that a fooming, expanding AGI that questions its goals will at all times magically arrive at the conclusion that ‘treating hominids well’ is a good idea?
It might be argued that people of different moral beliefs somehow coexist today, and that this should be evidence that various future intelligences will coexist tomorrow, too.
I see a number of issues with the presumption of peaceful coexistence.
An explosion of variants of intelligence will more likely result in an explosion of variants of morality – and the value of human life is unfortunately far from secure in a volatile era of expanding post-human intelligences.
"Dan, you're PRESUMING that these AI risks will happen. You can't see the future!"
— Daniel ‘No, Brother’ Faggella (@danfaggella) December 27, 2023
No, brother.
My risk position is based in my ignorance:
We have no idea what AGI would do, so we ought not expect it to ALWAYS act in the relatively small # of ways that support hominids.
The best that we can do as humans is to hedge against the destructive force of this moral singularity. If AGI and cognitive enhancements become viable, this phenomenon is – I believe – likely to occur.
One possible option would be to begin with the best possible moral “framework” for AGI – which is a massively challenging problem in and of itself. The best we can hope for there is a starting point, after which the AGI will expand vastly beyond it.
Another option would be for humans to escape into a mind-uploaded personal virtual space (which I’ve referred to as the “epitome of freedom”), where they can explore the far reaches of mental variations and post-human conscious experiences without the ability to physically harm one another.
In either case, I believe human beings need to accept that creating post-human intelligence will imply post-human ways of valuing things, and that this may imply very little value for humanity itself. There is no “AGI will definitely love and care for humanity” scenario.
This doesn’t mean we shouldn’t build AGI.
I would argue that building a worthy successor is the most (maybe the only) important thing that we can possibly do as a species.
We must accept, then, that handing that baton up to AGI will almost certainly imply not just the end of our reign, but the end of our continued existence as a species.
This implies that we should be careful to launch AGI in a way that would build such a worthy successor (see link above), because after we launch, it’s unlikely that a second chance will come. This would imply some kind of global governance, or at least shared vision, around AGI – which is the topic for another article (read: Unite or Fight – The International Governance of AGI).