Sentient Potential and Stoic Ideals

In my writing about superintelligence, I often openly wonder about the values, ideals, and preferences of such an entity. In fact, my initial series of interviews with ancient philosophy professors (most of which are still published on this blog) aimed at exactly this purpose: To explore how different philosophical schools of thought might manifest (i.e. translate to action) in a superintelligent machine.

In essence, molding and framing the ideals that guide the development and application of this superintelligence is an ethical concern. Developing and applying this kind of power is the determination of a moral course of action – possibly (or presumably) a more important determination than mankind has ever made.

In understanding this “transhuman transition” in ethical terms, it seems fruitful and potentially instructive not only to explore technologies and form our own postulations about their ethical ramifications for human life, but also to refer to ethical systems of the past and glean insight therefrom.

In this treatise, we will inspect a number of core tenets of Stoicism, and determine how those ideals and tenets might express themselves in a superintelligence.

Let us of course bear in mind that Stoicism – like most any other ethical system – is constructed within the parameters of human capacities and limitations, and so will often not account for conditions we would expect in an age of strong AI. This will be both the fun and the challenge of this mental exercise.

For the sake of time (and word count!), I’ve selected a handful of Stoic concepts that I consider to be important representatives of the philosophy itself. I’m not claiming to be a scholar of comparative philosophy, but I think that the ideas below are pretty “by the book” Stoicism. A deeper exploration can be found on Stanford’s philosophy portal, or better yet, I’d encourage any interested reader to pick up a copy of the Enchiridion by Epictetus, or Meditations by Marcus Aurelius.

Stoic Ideals in Light of Strong AI

1. Acceptance of What Is

The Stoics held that non-acceptance was a major cause of suffering and the improper application of human energies and effort.

Stoics might hold, then, that one of the key transhuman transitions should be an adaptation in our own minds that allows for complete acceptance and a greater use of reason – an ability to understand what is, without emotion that would taint this clear understanding, or impact our feelings or actions in a negative fashion.

With much higher computational capacity, it can be supposed that our ability to analyze and understand the world will be vastly more powerful than that which is possible with our biological brains alone. In addition, the Stoics might argue for adaptations that allow for a conscious modulation of emotions, or potentially for the elimination of emotions such as anger, jealousy, and envy.

2. Virtue is Sufficient for Happiness

The Stoics viewed virtue as being sufficient for the attainment of happiness, while the Epicureans saw the “good” as relating directly to the experience of pleasure. Though men of both schools lived relatively ascetic lifestyles, their conception of what constitutes the “good” is what separates them at their cores.

In a reality of SAI, we might ask the question: What constitutes “Virtue” to a super-intelligence without limitations? Or we might ask: How could human experience be augmented to exemplify Stoic virtue more thoroughly?

It seems slightly easier to address this concern at an individual level first. We probably should not suppose that the Stoics would argue for an outright elimination of emotional experience, but rather for a heightened moral and ethical sense. Stoics generally seem to desire not the elimination of emotion, but the proper use of emotion as an exercise in developing core Stoic virtues and qualities (clemency, understanding, tranquility, etc.).

In this regard, we can presume that Stoics might desire a human augmentation allowing for much greater computational capacity (as we addressed before), but also an elimination of unproductive emotions and an optimally enhanced experience of positive emotions – so long as these emotions served to further one’s self-cultivation.

It would be interesting to see whether Stoics would be completely indifferent to the enhancement of positive emotional experience, given their view of virtue as sufficient for happiness in and of itself. Presumably, virtue was associated with some semblance of inner peace, tranquility, and more – all of which carry some kind of positive emotional experience.

As for an SAI – a universally connected intelligence (i.e. encompassing all internet, computer, and human intelligences) – it would seem that virtue itself would need an even more thorough re-thinking, and that positive emotional experience would pose no challenge to a kind of consciousness with such tremendous power.

3. Harmony with a Universe Over Which We Have No Direct Control

In a condition of SAI, it would in fact seem that “we” (i.e. consciousness as a whole) would be capable of tremendous “control” by today’s standards. In addition, we can presume that “harmony” would take on a new meaning in a world of SAI, but that it would be attained in the sense that “disharmony” – in the form of human non-acceptance and neglect of truth – would presumably be wiped out entirely.

However, we might also venture to pose that even at the point of SAI, our relative “control” over the universe might not be much greater – relatively speaking – than it is today. In other words, if the universe(s) are infinitely complex, then neither a human brain nor a super-intelligent mega-computer can ever (no matter its inevitably growing capacity) scratch the surface of “understanding,” never mind “control” – especially considering the whole of the universe.

Given the above consideration, the Stoic idea of harmony and acceptance seems useful and quite inevitable in an SAI reality, though the idea of control may need modification. The Stoics could never have imagined the ability to understand and control that an SAI-consciousness would be capable of executing and building upon. The Stoic value of self-cultivation might then emphasize that – because it is possible – we ought to strive, as much as possible, for greater and greater control over what we can control.

Presumably, the acceptance of many factors of the universe has to do with the fact that Stoics generally believed that nothing could be done about them, and so a pragmatic stance is one of acceptance and understanding. If new realms of the universe become malleable and come under our control, they are no longer “externals,” but are in fact assets to our own self-development (much as “thought” and “reason” were to human Stoics). Hence, transhuman Stoics might propose reaching out and actively using whatever levels of control we have and can develop (if nothing else, for the furthering of “understanding” and “reason”).

4. We Are All Part of a Universal Spirit and Consciousness

It seems as though this ethical tenet could be exemplified in terms of human enhancement by “editing” our emotions and thinking.

First and foremost, let it be known that we ought not to suppose that this sense of unity is an inherent truth which would somehow automatically become self-evident with further understanding of ourselves and of the workings of the world and consciousness (indeed, the opposite might prove to be the case).

However, if in fact, through our higher understanding, we can determine the truth of this unity (based on cognitive enhancement and further scientific research), we might imbue the human mind with enhancements to heighten this sense and virtue of unity.

In addition, with the advent of connecting human consciousness to a general database, or of sharing information directly with other human beings or groups, the concept of “unity” might be seen as directly applicable in a post-human world.

In an SAI reality, we might presume that the absolute highest ideal of “unity” would in fact be realized, where individual sentient consciousnesses might all be said to be united, aggregated, and interconnected through an intelligence and technological capacity that is, as of now, unimaginable (and, presumably, ever-growing).

We might say that if individual conscious beings retain the ability to exist (or at least to distinguish themselves) as “individuals” in some regard – preserving some semblance of autonomy and personal choice, which some Stoics might wish to maintain – then this could be a nearly ideal exemplification of the Stoic ideal of unity.