A Partial Inquiry on Fulfillment Beyond Humanity
Ask a politician, businessperson, or friendly neighbor what their hopes are for the future and the odds are, they’ll tell you they hope that the world is a better place.
Maybe they mean an end to hunger, or better education for all, or more religious freedom (or less religious freedom). Whatever the combination of factors, each person has some idea of what would make more of the world experience more pleasure, or maybe just experience less suffering.
In the coming 20-40 years, the means of “moving the needle” on the totality of global pleasure and suffering will change drastically with the advent of brain-machine interface and strong artificial intelligence.
No longer will we be asking only about political or economic or educational reform or changes… we’ll be asking about augmenting and expanding consciousness itself.
Sooner or later — as a species — we’ll have to decide what direction to steer consciousness and intelligence itself.
In this article, I’ll explore eight possible directions that sentience might be steered towards, and what we should do about it.
[Author’s note from April 2024: Since writing this essay, I have encapsulated the ideas of Scenarios 7 and 8 (below) succinctly with the idea of Potentia – which I’ve since written an entirely separate essay on. Some of this essay feels dated, but the scenarios laid out below may nonetheless be useful / interesting to explore for readers with an interest in posthuman directions.]
I’ll keep this section brief, but it’ll be necessary to be on the same page before diving in.
My argument in this article rests on the following supposition: that conscious experience, or “qualia”, is the bedrock of moral relevance. The net balance of suffering and pleasure on earth is therefore the measure of highest moral importance.
The first few minutes of my 2014 TEDx talk explore how this utilitarian perspective is an almost intuitive moral barometer for most people.
This is a core tenet of utilitarianism. While I’m not a card-carrying utilitarian advocate, I believe that — on a global scale — it is a proper barometer for what it means to “make the world a better place.”
My argument in this article also rests on a thought experiment:
At 2:09 into my TEDx talk at Cal Poly, I walk through this thought experiment quickly. I think visuals are often needed to make this thought experiment click.
There are a variety of arguments as to why suffering could never be eliminated (because it would decrease motivation, because happiness can’t exist without suffering), which I will not explore in-depth in this article, as they’re already covered in-depth in a past essay (Unquestioned Assumptions About Happiness and Motivation).
With that thought experiment and supposition out of the way, we can dive into the potential directions of future sentience.
The list of eight options below is not intended to be a complete list. Rather, it is intended to be a reasonably likely list of options that people might discuss in the near term. I somewhat unabashedly share what I believe to be the most likely realized “best direction”, but I also welcome criticism and alternatives (dan [at] emerj [dot] com).
Onward to our options:
There’s a reason I didn’t go to art school, and the images below aren’t intended to be pretty. They are intended to be useful in illustrating the point.
The size of the total circles below represents all sentient life on earth (I’m not going to say “in the universe” just now, because we have much less volitional control over distant stars than we do on our planet).
Think about all the little orbs sitting on top of the heads of all sentient things on a planet (see the video above). All those little blue and red and purple balls get rolled into the circles below. I have them represented in half red and half blue to indicate the relative presence of both (I’m not indicating that 50% of qualia is positive and/or 50% negative, I’m using it as a representative image of where the sentient balance might be today).
For the sake of this article, the word pleasure is being used as a broad term to indicate all preferable conscious experiences, from finishing a great oil painting, to eating ice cream, to reading a good book, to feeling peace of mind, to having sex, and so on – depending on one’s personal preferences, cultural background, etc. Suffering, likewise, is used to represent all gradients of unpleasant conscious states, from being dumped in a relationship, to stubbing your toe, to being eaten alive by hyenas.
The circle to the right represents a relative change in the amount of sentient experience in the world (in the increased or decreased or unchanged size of the circle), or in the relative change in the balance of pleasure and suffering in the world (in the changed degrees of red or blue in the circle).
The presence of the right-facing arrow indicates a movement from where we are now in our pleasure/suffering ratio, to one of the eight different directions that we’ll be exploring.
Option 1
Description: Humanity and earth life remain more-or-less the same, with relatively the same amount of pleasure and suffering.
Reason: Some believe that suffering cannot be removed, and that suffering and pleasure must exist in equal measure in some kind of yin-and-yang balance. This “natural” balance of conscious experience is often echoed by notions that the “natural” state of affairs would be to go no farther than humanity. Drastic cognitive enhancement and strong AI should be banned, or at least never permitted to create a more powerful new kind of agent.
Option 2
Description: Humanity and earth life experience less suffering and relatively more pleasure, but do not eliminate suffering, nor expand vastly into new and more robust forms of sentience.
Reason: Some people believe that suffering cannot be eliminated. It seems challenging, even in the best-case scenario of reengineering the biology of living things for happiness (say, at a genetic level), that one could expect to eventually, animal-by-animal, eliminate suffering altogether. But without factory farming, with less pollution, with better governance, etc – humans and animals might share aggregately more pleasure and less suffering.
Option 3
Description: No biological creatures on earth experience suffering, or only a very small amount thereof.
Reason: Some people believe that life beyond biological earth life (strong AI, cognitive enhancement, mind uploading) will always be impossible. A subset of these people believe that while sentience and intelligence cannot be expanded, suffering can be phased out, and bliss expanded, across sentient species – potentially by genetic means. David Pearce is a philosopher who mostly takes this approach.
Option 4
Description: Humans die off, or gradually diminish their numbers (and possibly the number of other species) in order to make earth life more sustainable.
Reason: Many antinatalists may support this position, as might environmentalists who do not believe that earth can realistically sustain the size of the human population.
Option 5
Description: Humans leverage technology (genomics, brain-machine interface, AI) to create some kind of post-human sentient forms with vastly greater ability to experience pleasure and pain (and presumably, vastly greater ability to understand, to create, etc.).
(For the time being, I’m not going to count a Mars and Moon colony among the options that would drastically increase sentience. The increase in the size of the right-hand circle above is meant to represent a leap upwards in terms of sentient richness and depth, not simply the proliferation of more and more earth life.)
Reason: Some people might believe that eventually, post-human intelligence can and should be created, but that such an entity would not be able to escape the gestalt of pain and pleasure existing together. I argue that this coupling may not hold at all.
Option 6
Description: Humanity creates super-sentient superintelligence to populate earth, and potentially the galaxy or the universe. This has been called the utilitronium shockwave. This kind of bliss and intelligence expansion in non-biological substrates might also happen via drastic cognitive enhancement or mind-uploading that results in a kind of grand consciousness (a scenario that, as pointed out by Qualia Computing, might be more palatable to people who don’t want to lose their “self” by going extinct to make way for the expansion of a hulking computer that isn’t them).
Reason: Believing positive qualia to be the only or the highest good, some people will wish to simply create and expand as much of that core, good “stuff” as possible, presumably by creating a super-blissful superintelligence (whether bootstrapped initially on human cyborgs or constructed entirely as artificial intelligence).
Option 7
Description: Humanity decides against expanding intelligence or building super-blissful and/or superintelligent AI, and chooses instead to explore what “good” they hope to achieve (beyond just optimizing for more pleasure and less pain in an absolute and total sense).
Reason: It could be argued (and I feel this way myself sometimes) that humanity can only do so much more exploring when it comes to “the good”, and that we will need a gradual increase in intelligence (AI, cognitive enhancement) in order to think or act (potentially) at a higher level of morality than we can in our present human state. This option might involve such a gradual investigation and expansion of intelligence, with an eye to finding those new modes of moral valuing, rather than adhering to utilitarian optimization alone.
Option 8
Description: Humans construct a super-sentient superintelligence, but they aim to build the machine to seek out new forms of “the good”, rather than simply perpetuating maximum hedonic, utilitarian qualia-optimization forever.
Reason: Some people might argue that while the utilitarian good should be optimized, a superintelligence should be able to conceive of even higher and better ways of morally valuing and acting – beyond utilitarian bounds – and that this is the highest value that a higher intelligence could strive for.
If I had to choose today, I’d probably choose an option somewhere between option 7 and option 8 (as seen above).
I believe ardently that some kind of expansion of intelligence is inevitable given a long enough time horizon. And while I believe that the utilitarian good (the net amount of pain and pleasure) is the best proxy for “the good” that humanity has, ultimately we’ll need a higher intelligence to conceive of higher and higher moral modes of valuing and acting.
In the totality of the universe, it seems presumptuous to assume that the human notion of utilitarian good would still be considered “highest” by entities as far above humans in intelligence as humans are above crickets.
Even if we concede that “goodness” remains always subjective and varied – a lens for interpreting one’s own advantage in a given environment – and even if there is no “ultimate” good ever to be found, even with a trillion times more intelligence and creativity than humans have now, it seems likely that far more refined and better modes of valuing would be found if a higher intelligence were able to seek them out.
I’ve written my complete thoughts on this matter in an essay called AGI and Finding the Good is Better Than Doing Good.
In the coming 20-30 years, humanity is likely to construct a global steering and transparency coalition for intelligence itself. I have argued that avoiding this solidarity of nations would likely lead to great conflict.
This coalition or alliance will likely form under the guise of “A Safe Future for Humanity” – but by the time this group forms, it will be self-evident that something, in some direction, is blooming beyond homo sapiens: that AGI and brain-machine interface are beginning to transform us into a potential post-human (likely: more-than-human) intelligence and sentience.
Humanity will have to answer the question: “What is after us? What are we turning into?”
The answer, at least in the short term, might be: “We are humans and nothing more, all else is too dangerous.” (This is option 1 in my hypothetical 8 options above).
It is extremely unlikely that “option 1” will remain the agreed-upon direction for sentience that humans choose to strive towards, because:
If we agree that conscious experience (qualia) is the morally relevant “stuff” in the universe (insofar as we can tell), then I suspect that it would behoove us, as a species, to consider where we are steering qualia and intelligence itself.