A Partial Inquiry on Fulfillment Beyond Humanity
In this short thought experiment, I’ll aim to explore the place of “intention” and “motivation” in a transhuman future, and to seriously discuss some of the pros and cons. Speculation around the topic of intention today is mostly unverifiable – intention itself is potentially undetectable, as it takes place almost entirely inside our heads (until manifested), and often outside of our conscious awareness.
In a transhuman future, this may not be the case at all (as we’ll discuss), and it’s worth pondering these future possibilities as our technologies develop and potentially allow for them to take shape “in real life.”
The three topics that we’ll explore include:
1. Calibrating our own intentions – deliberately choosing and “programming” what motivates us
2. Having our intentions programmed – an outside power determining or constraining our motives
3. Transparent intentions – detecting and reading the intentions of others
Some people might argue that understanding and determining our intentions and motives is possible even now, given our present mortal capacities. I would argue that though it is admirable (and, in my opinion, desirable) to have self-knowledge and to be self-directed, even the most sagacious amongst us is unable to claim – with any certainty – the precise motives for his or her actions, never mind understand them.
Undeniably, we as humans are much more complex than any one goal might define. At present, a machine may be programmed to, say, file data into a database, or to move an item from one warehouse to another. Rarely if ever could such a simple, “programmed” model be applied to our own behavior. We seem to seek fulfillment, significance, relationships, a grounded sense of self, and more – and boiling these elements down to a single objective or aim is grossly superficial.
However, if self-concordance continues to be a contributor to fulfillment in a transhuman future, then the ability to manually “calibrate” that which motivates us may allow us to more truly understand ourselves and live up to our values. This model may be simplistic (and who knows whether our experience is made better or worse by simpler or more complex motives for our actions), but it poses an interesting picture of how we might come to understand and develop ourselves in the future.
In this way, cognitive enhancements and brain-machine interface technologies might allow people to become who they want to be, and be motivated by what they want to motivate them. A few thought experiments to flesh this example out:
Potential dangers arise here when we program particular motives which remain “stuck” and end up being harmful to ourselves or others. We might imagine someone whose single programmed motive is to improve as a musician, allowing him to justify even murder in order to get additional practice time (if he could be sure nobody would find out).
This narrowed motivation may not allow for remorse, for empathy, or for grief, and so may in turn create a potentially less rich experience for the agent – and certainly a more dangerous world for everyone living in it with him. On the other hand, adequately selected or balanced motivations may yield more consistent effort and energy from the agent in a direction that is better for him and those around him, and so this kind of self-programmed intentionality at least seems to have the potential to benefit both individual and society – but it’s not clear how easily this benefit could be attained.
Nature calibrated us to balance the needs of our health, our social bonds, and many other factors – and simply hijacking this complex (and largely functional) system would probably be unwise, and would involve lots of testing, tweaking, and research.
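To make the “stuck motive” worry concrete, here is a minimal sketch of the weighting idea. Everything in it – the motives, weights, and candidate actions – is a hypothetical illustration, not a model of real cognition or of any actual enhancement technology. The point is simply that an agent scoring its options on a single motive ignores every cost that falls outside that motive, while a more “nature-like” balanced weighting does not:

```python
# Purely illustrative: hypothetical motives, weights, and actions.
# A single "stuck" motive can endorse harmful options because costs to
# every unweighted value simply vanish from the agent's scoring.

actions = {
    # action: hypothetical effect on each value dimension (positive = good)
    "practice_all_night": {"music_skill": 3, "health": -2, "social_bonds": -1},
    "sabotage_a_rival":   {"music_skill": 2, "health": 0,  "social_bonds": -10},
    "rehearse_with_band": {"music_skill": 2, "health": 0,  "social_bonds": 2},
}

def score(effects: dict, weights: dict) -> float:
    """Weighted sum of an action's effects; unweighted dimensions are ignored."""
    return sum(weights.get(dim, 0) * value for dim, value in effects.items())

stuck_motive = {"music_skill": 1}                                  # one fixed motive
balanced     = {"music_skill": 1, "health": 1, "social_bonds": 1}  # a broader mix

for label, weights in [("stuck", stuck_motive), ("balanced", balanced)]:
    best = max(actions, key=lambda a: score(actions[a], weights))
    print(f"{label:8} -> {best}")

# stuck    -> practice_all_night  (health and social costs carry zero weight)
# balanced -> rehearse_with_band  (harms to other values finally count)
```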
Things get pretty Orwellian pretty quickly when we consider the possibility of having a government or other entity program our intentions, or in any way determine them for us. On the one hand, it seems reasonable to ask whether – in a world of potentially dangerous emerging technologies – we wouldn’t all benefit from the eradication of the desire to kill, to steal, or to lust for power (as we mortals are, at present, capable of all three on a grand scale).
A world where our “intentional range” is limited to types of motives that generally benefit ourselves and benefit society may in fact be the only way we might remain a “society” at all.
In a transhuman world where biotechnology and other kinds of “enhancement” have made humans more and more capable, it seems that this capacity for destruction and conflict would indeed multiply along with our abilities. With needs, goals, and values even more varied than we see today, it seems – from one perspective – that “keeping the peace” would be nearly impossible without “uploading” to entirely virtual realities (where no other sentient life forms may be harmed), or some kind of constraint on our ability to think of or act on ideas that could harm others (I’ve posited this theory in my “Epitome of Freedom” essay).
At the same time, this poses the question: who determines these standards?
The “determiners” could very well be the wielders of power who leverage the controlled populace to do their bidding, possibly resulting in a much worse world for most sentient life.
Similarly, if the ability to determine the motives and intentions of humanity were created and used, it might result in a lack of new ideas, of problem-solving, of genius – and also of fulfillment. This would not necessarily be the case (for example, motives could be set to “fruitful innovation,” and our need for autonomy in order to attain fulfillment might itself be “tweaked” in a future of this kind – so that we might still remain happy).
It seems important to be wary of any power controlling society’s minds, as this would place the power of the world entirely in the hands of a given controlling party. If “power corrupts, and absolute power corrupts absolutely,” then this control would seem to lead inevitably to humanity being taken advantage of by the will of the controlling agent or party.
However, if this party were a superintelligent AI geared towards the harmonious flourishing, fulfillment, and expression of humanity, then life may very well be significantly better with either total control (and mere “felt autonomy”), or with limitations and guidelines inherently set on our inner intentions, promoting a unified goodwill and harmony among people. We might imagine that a superintelligent AI might want such an “intention alignment” to bring about the cosmopolitan spirit in all human beings (who are otherwise prone to be ruthlessly selfish and tribal).
Though disturbing, it is also important to understand that what is best for human fulfillment is not necessarily what is “morally best” in any innate way. From a classic utilitarian standpoint, “the greatest happiness principle” seems to apply to sentient artificial intelligence in addition to human beings.
Another interesting potential development in terms of furthering a transhuman notion of “intention” would be a new kind of transparency – the ability to detect and discern the intentions of others around you.
This ability is particularly interesting because in society today, we tend to live and interact with others in a way that conveys that our intentions – even if they were known – would disturb no one. I’ll be bold enough to pose that, generally, we convey the appearance of whichever intentions we think are most conducive to achieving our desired aims in life – a point that I allude to in the introduction of my TEDx presentation from 2017.
One who seeks maximal profit in a business transaction aims to put on the air of a helpful and discerning consultant rather than a salesman; being frank about wanting to make as much money from the transaction as possible simply wouldn’t be conducive to his aims. A man courting a woman will go to the movies and pay for dinner as though these were the sole ends he seeks (immediately conveying his actual ends would not usually serve them). A leader aiming to destroy a rival will do better to attain his ends by proclaiming and appearing to stand for a specific set of values, or by providing greater value to customers.
This game of appearances keeps a semblance of trust and pleasantness in human interactions as we oft pass each other by, conveying the pleasantest well-wishing and most noble or humble intentions. It seems safe to say that if we could walk about for one day and read the genuine intentions of those around us, we would be duly disturbed.
This “reading,” however, implies that intentions are somehow detectable or discernible. It would seem that in some respects, “intentions” are ephemeral notions, composed of dominant desires, prominent goals, and ongoing inner dialogue. If we were to “read” intentions, we would need a better and more accurate determination of the constituents of “intention” than we hold today.
If it became the norm that intentions were readable, it would seem as though we would either become accustomed to what are now relatively disturbing intentions of those around us, or we would somehow move forward in our development towards a kind of society where malicious or detrimental intentions and thoughts are somehow ferreted out and eliminated, either by arresting individuals or by “programming” the minds of society.
Though I’ve seen neither film, the thought-reading scenario has been explored in the movie “What Women Want” starring Mel Gibson, and the prosecution of future acts in the movie “Minority Report” starring Tom Cruise.
As a thought experiment, it’s been interesting to consider what increased control over “intention” might imply for a transhuman future. Based on my present democratic notions of freedom and autonomy (and present constructs of what comprises “fulfillment”), it seems as though the first scenario (the more deliberate control of our own intentions) offers a potentially preferable human experience.
The notions of “being programmed” in a way that limits our intentions, or “being monitored” in a way that detects our intentions, both seem threatening to freedom, and both appear to be apt tools for a given ruling power (or intelligence) to rule and control society. Though the prospect of “being programmed” seems disturbing and detestable, I would not be bold enough to say that it would be inherently “bad” in every respect.
The ideas posed above are, again, mere thought experiments, but they point to serious ethical and societal issues if emerging technologies succeed in allowing the “tinkering with consciousness” that I refer to as the “ultimate ethical precipice.”
The good news is, we don’t appear to be faced with these issues anytime soon. In fact, you may never know my intention for writing this article at all. Maybe, neither will I.