A Partial Inquiry on Fulfillment Beyond Humanity
Many renowned technology pundits, entrepreneurs, and inventors have posited a similar vision of an artificial intelligence-enabled future:
I posit that once something like UBI is implemented, people will ask for more than wealth; they will ask directly for happiness – through pills, fulfilling virtual experiences, and eventually brain augmentation.
Overcoming the need for humans to work (if we get there) is merely the beginning of a much greater trajectory, one that ultimately ends in a post-human transition – initially through cognitive enhancement, and eventually through full-blown mind-uploading to a drastically non-human, non-physical conscious existence.
It plays out as a relatively continuous path towards greater abundance and greater emphasis on human wellbeing.
An oversimplified history – and projected future – of the value of “wellbeing” or “happiness” in society might look like this:
If automation really did concentrate unimaginable wealth, then some fair and just means of distributing it does seem best – even if it implies that most of humanity can serve little economically productive purpose. Whether through universal basic income (UBI) or some other mechanism, abundance should be shared, and no massive portion of society should be left destitute.
That said, UBI or other means will not provide people with purpose or meaning – and indeed efforts to “give” people purpose and meaning cannot possibly give them what they’re after: Happiness.
Martin Seligman, the founder of positive psychology (the study of human wellbeing), has claimed that the goal of positive psychology should be to have 51% of the human population flourishing (read: fulfilled, eudaemonic) by 2051. I argue frankly that this cannot be done without altering the hardware and software that fetter us in pain and anxiety. This should be done as safely as possible, and with proper testing and iteration, but it’s required.*
A government that genuinely cares about the “net tonnage of human happiness” (a term from Martin Seligman that I rather like) of its people will move swiftly from providing for people’s needs, to augmenting their minds, and eventually to creating vast vistas of fulfillment beyond the human condition in some other substrate than the limited and hampered human mind.
The crux of the matter is that human beings aren’t happy animals. I’ve elaborated on this in great depth in my article about the biological impediments to human wellbeing. To quote Montaigne:
“Is it not a singular testimony of imperfection that we cannot establish our satisfaction in any one thing, and that even our own fancy and desire should deprive us of the power to choose what is most proper and useful for us? A very good proof of this is the great dispute that has ever been amongst the philosophers, of finding out man’s sovereign good, that continues yet, and will eternally continue, without solution or accord.”—Montaigne, Of a Saying of Caesar
Here’s how I suspect we’ll have to evolve beyond some kind of UBI scenario:
I consider Japan’s hikikomori phenomenon to be an overt precursor to this kind of future – as well as the video game obsessions of the youth in South Korea and the Netherlands. Once a civilization has achieved abundance, and happiness still isn’t there (given human mental hardware and software, there is no sustained happiness), what other option is there but escape?
We claw and scramble towards what we think will yield our own wellbeing, and that road eventually leads beyond the human condition. In this way, human motives and drives will indeed tear us from humanity.
People may be happier when they are freer, but freedom and material wealth have done little to stave off depression and suicide in the developed world. Embarrassingly many of us pay for medication to improve our emotional state, despite the fact that we live like gods compared to our ancestors of just two or three generations ago. People turn to the government to look out for the people’s best interest.
“Best interest” is a euphemism for happiness. End of story.
People want Universal Basic Happiness, not Universal Basic Income – the latter of which is a meager proxy for wellbeing, but not the emotional experience itself.
Eventually, governments will have to manage the lotus-eating portion of their population who demand sustained wellbeing without the hindrances of the human condition. UBI – if enacted – will showcase poignantly that happiness can’t be found in our current form.
Grand questions loom:
Those questions will be left unanswered here – I don’t have a dogmatic answer to any of them; they’re all valid questions.
What does seem clear is that the demands on governments will not ultimately be to provide citizens with “wealth” or “meaning”, but with “wellbeing” – and that this will involve a potentially very dangerous transition to augmenting our brains themselves. No technological abundance can compensate for the hedonic treadmill. Only new mental substrates can.
**Disclaimer 1 – I have nothing against the human mind. I’m grateful to have one, as it is the only thing we know of that would permit me to express my thoughts this way. What I’m getting at in this part of the article is that the mind is – on the whole – exceedingly limited in its ability to understand the cosmos, and in its ability to sustain wellbeing (see: “hedonic treadmill”, or the above quote by Montaigne).
**Disclaimer 2 – I received my Master’s degree in positive psychology from UPENN, in Seligman’s program. I have respect for the discipline and its general direction, and I hope that its researchers and proponents will think about the future of fulfillment, and about the direction of fulfillment beyond the current human condition.
Header image credit: Artvalue