Present Ideals Inspiring Future Ideals – and Vice-Versa
As humans, learning often feels good, food often tastes good, novelty brings joy to life, living by values that we set brings order to our consciousness, and, for all but a very few of us, relationships give our lives their deepest meaning and bring us our greatest highs.
They may also be said to give us our greatest lows – but other “negative” influences such as prolonged physical pain, a lack of autonomy in life and work, thoughts of worry and sorrow, and countless other factors rack our minds with experiences of discomfort and pain.
Morality and ethics are still tremendously complex, of course, but at least in the context of human life we might imagine what a “good” world would be like to some significant extent – even if only in a very basic and broad sense (i.e., there probably wouldn’t be much murder or war, disease and melancholy would not run rampant, running water would exist, medicine would cure diseases, and so on).
Let us presume that consciousness exists, and that in a post-human future, the brains of human beings are merged with computers in some way, shape, or form.
If a computer of nearly limitless potential to calculate and build upon itself should aim to be “fulfilled,” what would that even imply or look like? What would “super intelligent super bliss” be like?
Just as a cricket cannot possibly imagine the rich range of pleasures that a human can experience (from great literature, to admiring a sunset, to enjoying a joke, to warm nostalgic memories, etc), it seems rational to suspect that humans are incapable of imagining higher or richer forms of pleasure than we ourselves are privy to.
For example, we can hardly imagine that “super intelligent super bliss” would require food: not only could such an entity likely produce the pleasurable sensations of food without actually eating (by replicating this sensation at trillions of times the pleasure a human might experience), but it also likely wouldn’t require the sense of “taste” in order to attain whatever its “fulfillment” is composed of in the first place.
I’ve argued that researchers in the field of positive psychology (the field in which I was formally trained in grad school at UPenn) should consider extrapolating “fulfillment” to kinds of experience beyond what we know now – but at least at the time of this writing, most writing on transhuman experience comes from the sci-fi and transhumanist communities, not from academic psychology. I hope this will change, as “fulfillment” will become a much more boundless term in the coming 20-40 years.
As humans, we have our five senses, and a great deal of our world is understood in those terms. We also have our own biologically programmed criteria of “feeling.” A constructed non-human entity may be bound by none of these factors.
In fact, it may have thousands and thousands of “senses,” of its own construction – allowing it to detect and gain feedback from minute aspects of reality that as human beings we could not even comprehend, never mind experience.
In addition, this kind of entity (be it a modified human, or something other than human) would have no innate criteria for “feeling,” “experience,” or “fulfillment.” Presumably, these criteria would have been programmed into it – and, presuming it has the power, it would be able to adjust these criteria on its own.
With that being said, we might presume that an entity of this sort that sought “fulfillment” or positive “feeling” / “experience” would be able to continuously imbue itself with whatever the highest ideals of this experience might be. If this entity sought more and more of this experience – as it might be said we do as humans – it may not require much other than keeping itself running and building more “nodes” or parts of itself with which to experience more and more of this positive experience (see the concept of the “utility monster“).
We might also presume that such an entity might not want or need “feeling” at all. It would require innumerable modes of feedback to itself, but it might be presumed that “feeling” would be altogether irrelevant for its functioning and its goals (whatever those goals might be). While emotion likely serves as an extension of human cognition, a machine might lack not only emotion but subjective experience itself. Depending on the goals of the entity, such subjective experience may not be necessary.
We might also presume that “feeling” might only hold relevance for conscious beings, and that such an entity – though it may have objectives, senses, knowledge, and computational and creative capacities far beyond anything man could imagine – might not be “awake” and “aware” in the literal sense of being alive.
This seems a disturbing idea, given that consciousness appears to be what counts in the moral weight of an entity (i.e., animals and humans weigh on a moral scale, while rocks and dirt do not). Yet it isn’t impossible to imagine that a future superintelligence may do without subjective experience, achieving its goals (whether that be populating the galaxy, protecting earth from asteroids, or whatever else) with no self-awareness or “senses” in the conscious way that we perceive them – just a cold machine, with all the ability to act and do and learn, but without any subjective awareness at all.
Food for thought. It seems as though the applications of philosophy and ethics have never been more important than they are now, as we imagine our actual ideals – and the furtherance of human potential in general.
In political philosophy, ideals and principles might have taken hundreds of years to come to fruition, but in the construction and framing of trans-humanity and what is called the “singularity,” we will actively set the tone for a future that we will likely get to see within our own lifetimes.
In my opinion, there is nothing more fascinating than the scope of fulfillment and conscious potentiality – and as a race there seems to be no more important conversation of our era than that of molding what consciousness will become and how we will all be affected. Personally, I consider this to be the most pressing moral concern imaginable (see “the cause“).