As humans, learning often feels good, food often tastes good, novelty brings joy to life, living by values that we set brings order to our consciousness, and, for all but a very few of us, relationships imbue our lives with the most meaning and bring us our greatest highs.

They may also be said to give us our greatest lows – and other “negative” influences, such as prolonged physical pain, a lack of autonomy in life and work, thoughts of worry and sorrow, and countless other factors, rack our minds with experiences of discomfort and pain.

Morality and ethics are still tremendously complex, of course, but at least in the context of human life we might imagine what a “good” world would be like to some significant extent – even if in a very basic and broad sense (i.e., there probably wouldn’t be much murder or war, disease and melancholy would not run rampant, running water would exist, medicine would cure diseases, etc.).

Let us presume that consciousness exists, and that in a post-human future, the brains of human beings are merged with computers in some way, shape, or form.

If a computer of nearly limitless potential to calculate and build upon itself should aim to be “fulfilled,” what would that even imply or look like? What would “super intelligent super bliss” be like?

Just as a cricket cannot possibly imagine the rich range of pleasures that a human can experience (from great literature, to admiring a sunset, to enjoying a joke, to warm nostalgic memories, etc), it seems rational to suspect that humans are incapable of imagining higher or richer forms of pleasure than we ourselves are privy to.

For example, we can hardly imagine that “super intelligent super bliss” would require food: not only could such an entity likely produce the pleasurable sensations of food without actually eating (replicating those sensations at trillions of times the pleasure a human might experience), but it also likely wouldn’t require the sense of “taste” in order to attain whatever its “fulfillment” is composed of in the first place.

I’ve argued that researchers in the field of positive psychology (the field in which I was formally trained in grad school at UPENN) should consider the extrapolation of “fulfillment” to kinds of experience beyond what we know now – but at least at the time of this writing, most transhuman experience writing comes from the sci-fi and transhumanist communities, not from academic psychology. I hope this will change, as “fulfillment” will become a much more boundless term in the coming 20-40 years.

Boundaries / Boundlessness in “Experience”

As humans, we have our five senses, and a great deal of our world is understood in those terms. We also have our own biologically programmed criteria for “feeling.” A constructed non-human entity may be bound by none of these factors.

In fact, it may have thousands and thousands of “senses,” of its own construction – allowing it to detect and gain feedback from minute aspects of reality that as human beings we could not even comprehend, never mind experience.

In addition, this kind of entity (be it a modified human, or something other than human) would have no innate criteria for “feeling,” “experience,” or “fulfillment.” Presumably, these criteria would have been programmed into it – and, presuming it has the power, it would be able to adjust these criteria on its own.

With that being said, we might presume that an entity of this sort that sought “fulfillment” or positive “feeling” / “experience” would be able to continuously imbue itself with whatever the highest ideals of this experience might be. If this entity sought more and more of this experience – as it might be said we do as humans – it may not require much beyond keeping itself running and building more “nodes” or parts of itself with which to experience more and more of this positive experience (see the concept of the “utility monster“).

The Potential Irrelevancy of “Experience” in an Entity

We might also presume that such an entity might not want or need “feeling” at all. It would require innumerable modes of feedback to itself, but “feeling” might be altogether irrelevant to its functioning and its goals (whatever those goals might be). While emotion likely serves as an extension of human cognition, a machine might lack not only emotion but subjective experience itself. Depending on the goals of the entity, such subjective experience may not be necessary.

We might also presume that “feeling” holds relevancy only for conscious beings, and that such an entity – though it may have objectives, senses, knowledge, and amazing computational and creative capacities far beyond anything man could imagine – might not be “awake” and “aware” in the literal sense of being alive.

This is a disturbing idea. Given that it seems to be consciousness that counts in the moral weight of an entity (i.e., animals and humans register on a moral scale; rocks and dirt do not), it isn’t impossible to imagine that a future superintelligence may do without subjective experience, achieving its goals (whether that be populating the galaxy, protecting earth from asteroids, or whatever else) with no self-awareness or “senses” in the conscious way that we perceive them – just a cold machine, with all the ability to act and do and learn, but without any subjective awareness at all.

What This Might Mean for Us as Humans in a Transition to Transhumanity

  • As we merge with machines to whatever extent (and as we tinker with consciousness), it might be said that we could be capable – even in the early stages of this transition – not only of functioning at a higher level in terms of our thought and our work, but of experiencing perpetual fulfillment at a completely inhuman level, through the “virtual” (rather than actual) fulfillment of our happiness requirements, or through the obsolescence of these criteria in the first place (over-writing / re-programming).
  • If in fact we do merge with machines, it seems very probable that we might not “live” as humans at all, but merely experience a virtual reality or a “nirvanic” state – holding our consciousness in a perpetually fulfilled “mode” without any requirements on our end (this could be through a liberating or crude kind of futuristic wireheading).
  • If our intelligence is to be merged with the rest of humanity’s, it seems as though any semblance of individual consciousness or “feeling” may be irrelevant in the first place, and may either fade away or be blended into some kind of aggregate consciousness – within which we may or may not maintain any of our past awareness at all – which may be the equivalent of death for the individual consciousnesses merged into it.

Food for thought. It seems as though the application of philosophy and ethics has never been more important in our lives than now, as we imagine our actual ideals – and the furtherance of human potential in general.

In political philosophy, ideals and principles might have taken hundreds of years to come to fruition, but in the construction and framing of transhumanity and what is called the “singularity,” we will actively set the tone for a future that we will likely get to see within our own lifetimes.

In my opinion, there is nothing more fascinating than the scope of fulfillment and conscious potentiality – and as a race there seems to be no more important conversation of our era than that of molding what consciousness will become and how we will all be affected. Personally, I consider this to be the most pressing moral concern imaginable (see “the cause“).