Sentient Potential and Stoic Ideals
In my writing about superintelligence, I often openly wonder about the values, ideals, and preferences of such an entity. In fact, my initial series of interviews with ancient philosophy professors…
Below is correspondence between myself and Professor Peter Adamson of King's College London. Professor Adamson has published widely on Greek and Arabic philosophers, including Epictetus, the philosopher of particular interest here. After I reached out to Professor Adamson and heard his spirited podcast on Epictetus, we chatted about the Stoic lens on the modern concerns of human enhancement.
Daniel Faggella:
1) How would Epictetus respond if consciousness could be created in a machine at will (i.e., real volition, real experience / perception / awareness)?
2) If the technology were safe and simple, and implants were available to increase human intelligence and learning speeds tenfold, would Epictetus be the first in line? Why or why not?
Professor Adamson:
1) I think that although ancient philosophers don’t ever talk about artificially created intelligence, Stoics like Epictetus would be in an easier position to acknowledge its possibility than Platonists. This is because they are materialists and in fact think that our souls are physical entities (made of “breath” pervading the body). Actually, among Stoics, Epictetus is notably unconcerned with this issue; he rarely talks about the underlying physical theory. But I suppose he would (or should) say that if the machine had a will, as we do, then the machine would be subject to all the same points he makes about human will in his diatribes.
2) I don’t think Epictetus would care much about the possibility of becoming vastly cleverer. Aristotle would probably advise you to go for it, but since Epictetus thinks that ethics/happiness is about having control over one’s will, it wouldn’t really be helpful on his view just to become more intelligent. If you could sign up for more self-control, he might be more interested! Also, by the way, Stoics think that (all) humans already have the capacity to become perfect sages; it isn’t lack of cleverness that is getting in their way, but bad desires and false beliefs (which are the same thing). So for this reason too I don’t think any Stoic would say that it made a big difference whether you could increase your IQ, so to speak. However, they might well say that it is a “preferred indifferent”: not something on which happiness depends, but something you might reasonably choose anyway all else being equal (i.e. so long as it didn’t get in the way of virtue). Things like health fall into this category. The only reason I can think of why a Stoic would hesitate is that they thought that our current portion of intelligence is divinely assigned; it’s not clear to me, though, whether they would think that was a reason not to mess with it. If so, then it would indeed clash with virtue to increase IQ, so they would say not to do it.
Daniel Faggella:
1) Yes, I imagine cleverness would matter pretty darn little in the grand scheme of things. If the mental capacities to be enhanced could include the power of will, awareness, and control of emotion, in addition to an increased ability to learn, might this be a boat Epictetus would hop onto? Or is there an innate “human-ness” he’d want to stick to in some way?
What I mean to say is this: if enhancement were not a cheap “shortcut” to sageness, but a pathway into further reaches of sageness, is it something he’d be excited about and get in on?
2) Would Epictetus be excited about the opportunity for eternal life?
Professor Adamson:
1) Actually, the more I think about it, the more I think that Stoics would lean towards cautioning us against wishing to improve/alter human nature. They aren’t exactly Aristotelians about this, but they would, I think, hold that god has made human nature the way it is for a reason and that our nature is sufficient to give us happiness. So at most, as I said, improvement would be a preferred indifferent. In fact, I think we can go further: improvement is not even possible. Your new version of the central question is illuminating here: would E. wish for more willpower? I think he would say: you already have your will completely under your own control. That is the whole point he wants to make about the will, in fact. So there is no room for improvement; the problem is not that people lack the tools to be happy but that they are failing to use the tools they already have! Or, to put it in your terms, you don’t need anything further in order to progress towards sagehood in terms of innate capacities; you just need to decide to do it and stick to it. That may require training and “spiritual exercise” etc., but that is not the sort of thing that one could have by magic.
2) About eternal life: I don’t think he would consider it necessary or desirable. Accepting the term of one’s life is a classic example of something Stoics teach us to do.
Undoubtedly, it is rather difficult to postulate the opinions of a man who has been dead for nearly two thousand years, but the aim of this “thought experiment” is more the application of Stoicism than the particular reaction of Epictetus himself (though, if he were around, I’d certainly ask).
My supposition is that there may be a great many barriers to our control of will, and that enhancements might increase not only capacity (to learn, to move, to do) but also will and volition: self-control and self-determination. Both in terms of the circuitry of the brain itself and the perfecting of our self-possession, I think it could be supposed (either correctly or incorrectly) that improvements might be had. However, it must also be conceded that in the unknown dimension of consciousness, the Stoic inkling might be correct, and that making the most of our will as it is may be as high a task as we could ever strive toward, “enhanced” or not.
Personally, I doubt the notion that humans already have perfect sagacity and volition, and I see much room for improvement in the hardware and software we’ve been given (though we ought to be careful not to violate what we don’t understand by “leaping” into transhumanism without proper precautions).
– – –
An extra thank you to Professor Adamson for contributing to our series. If you’d like to hear his podcasts on the history of philosophy, you can find them here, and you can learn more about the Professor himself at his King’s College page.
All the best,
-Daniel Faggella