Strong Phronetic AI: Building the Most Important Intelligence

(NOTE: This article was first drafted in 2012, and is among my oldest. While some of the ideas mentioned here are still interesting to me, my perspectives [as of Dec 2023] on the “AGI we should build” are best summed up in Potentia and Worthy Successor. The idea of phronesis is similar to Spinoza’s concept of adequate ideas, which is merely one part of potentia. Regardless, I leave this article up for you to peruse – though I do recommend the two newer articles referenced above.)


As of late, much of my thinking has been anchored to the idea of a “beginning and end” of the transition to higher levels of consciousness and conscious power. By “beginning and end” I mean: the transition into, and eventual ideals of, the developments in consciousness.

The “transition to transhumanism” has been the phrase I’ve used in discussing and exploring our major steps forward in the expansion of human potential, and real change in the human experience itself – taking us vastly beyond a subtle “augmenting” of our capacities (as the technology of today – such as the internet or cell phones – might be said to do) to a point of tremendous mental enhancement.

This short article will focus on the beginning of that transition with respect to the kinds of intelligent entities we will create. The questions being addressed are:

  • What will be the most important initial form of intelligence for us to create?
  • How should these technologies be tested? What might be the social or political implications?

Some might suspect that a first superintelligence might serve us best if it could solve all medical diagnostic problems or predict financial markets. On the other hand, having the first superintelligence focused purely on financial predictions might also be seen to have its downside.

Strong Phronetic AI

I pose four linked propositions:

  1. The ability to alter and create consciousness holds more ethical gravity and practical importance than any other matter known to man today – something I’ve harped on consistently in my first and second TEDx talks. (Read: Steering Sentience)
  2. As humans, our motivations, assumptions, and methods of thinking are riddled with error and with potential self-deception. (Read: Fries on the Pier)
  3. Attaining the “best” or most beneficial future cannot be done with a concrete plan, but must be a constant and vigilant process of discerning possibilities and factoring in new trends and information.
  4. Hence, it may be best that the first superintelligence we create be an intelligence built with the explicit goal of aiding in better and more accurate decision-making about the development of humanity and the ethical impacts that lie therein.

Our human capacity to think through complex ideas of the future is limited in a great many regards. We are limited in how much we may know or learn (having only one brain), we are limited in our capacity to compute and predict, we have a great number of psychological heuristics which limit our perception and understanding, and it is essentially impossible for us to truly know or calibrate our own intentions and motivations.

If there were to be a super-intelligent machine, might its best job not be precisely this kind of predicting and calibrating towards “the good” (what we should do next as a species), free of human limitation?

Given the massive ethical gravity of the ability for machines to create or tinker with consciousness itself (which is to say, the only experiencer of feelings and conceiver of morality in the first place), it seems only appropriate that our first super-intelligent entity might take on the task of modulating and monitoring the development and application of future technologies of incalculable ethical ramifications.

In philosophy, the term “phronesis” denotes practical wisdom, and it would seem that a super-intelligent machine might be capable of a much more effective and thorough application of this faculty – while potentially (assuming the programming is done correctly) being free of many of the vicious tendencies of human thinking, or even of any overtly self-serving motive (though this is easier said than done).

The process we would aim to apply to the proper development and just use of sentience-altering technologies would be vigilant, diligent, and careful. Exploring economic, political, or social ramifications is currently best done by pooling the expertise of various humans – though we can imagine a machine able to consider more than a thousand humans could, and to do so without an “agenda” of its own, other than determining the potential paths of the future and aiding in our decision-making, policy-making, and ethical considerations from a much higher and more rounded perspective than we as humans might ever be capable of perceiving.

We might refer to this machine as a “strong phronetic AI.”

This idea is similar but not identical to the notion of an “Oracle AI” posited by Nick Bostrom.

Alternatives to the Strong Phronetic AI

An alternative to this kind of construction of a super-intelligence might be the random creation of various super-intelligences all over the world, each moving forward on its own initiatives and serving its separate functions. We might imagine the innate dangers of superintelligent AIs created with arbitrary goals, which could be carried out to a degree that would pose threats to humanity.

Bostrom famously addresses the potential issue of a superintelligence designed solely to make as many paperclips as possible, proposing that such a machine, with no restrictions or other rules, may very well destroy the earth and expand its intelligence into the universe, all in order to expand its capacity to make the maximum number of paperclips (converting all materials into paperclips – or into machinery used to make more paperclips). The same might be said of any other particular domain.

It might be supposed that each developed system could be programmed with its own “phronetic” and ethical awareness, and that through these multiple lenses we might come to a more rounded conception of phronesis, particularly if it is possible for these machines to communicate and cross-reference one another, or gain access to each other’s learning.
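As a purely illustrative toy (nothing in this article specifies an implementation, and every name below is hypothetical), the “cross-referencing” idea can be pictured as a simple aggregation of independent assessments, where disagreement among lenses is itself a signal worth flagging rather than noise to be averaged away:

```python
# Toy sketch only (hypothetical names, not a real system or library):
# several domain-specific "advisor" models each score the ethical risk
# of a proposed action, and we cross-reference their views.
from statistics import mean, stdev

def cross_reference(assessments: dict[str, float]) -> dict:
    """Aggregate independent risk scores (0.0 = benign, 1.0 = grave).

    High disagreement is treated as a cue for more human vigilance,
    not as noise to be averaged away.
    """
    scores = list(assessments.values())
    disagreement = stdev(scores) if len(scores) > 1 else 0.0
    return {
        "consensus": mean(scores),
        "disagreement": disagreement,
        "flag_for_review": disagreement > 0.2,  # arbitrary toy threshold
    }

# One lens (the policy model) sees grave risk the others miss.
print(cross_reference({
    "housing_model": 0.30,
    "medical_model": 0.45,
    "policy_model": 0.90,
}))
```

The point of the toy is only structural: a bare averaged consensus (0.55 here) would hide the one lens that flags grave risk, which is exactly the kind of blind spot a single-purpose system invites.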

The potential problem here is that if these many strong AIs are built with phronesis as a secondary consideration to their first purpose (say, predicting trends and making policy changes in the housing markets), then their phronetic emphasis might be tainted by that higher purpose. It seems more than safe to assume that an intelligence far beyond that of humans ought to be constructed with extreme caution with regard to the power it might wield over people, resources, or other computers.

The danger might be a “running amok” superintelligence with any agenda at all outside of determining and helping humanity move towards its most beneficial future (of course, “beneficial” and “best” are subjective, but we would likely gain a more rounded and better view of them when they are compared, contrasted, and considered by a machine millions of times more cognitively capable than we are).

At least from my current conception, this strong phronetic AI seems to be the best type of artificial intelligence for us to build first.

Difficulties

It would seem as though the grandest efforts in creating a strong artificial intelligence will be in governmental organizations or in public companies with massive research budgets and commercial interests in intelligence. If this is where the resources tend to lie, then the initial construction of a machine geared toward “phronesis” might be far less likely than one designed to forward the agenda of the organization which constructed it (for a corporation, making profit; for a military, making weapons), as opposed to serving humanity and our aggregate betterment as sentient beings.

Such a “benefit of all” motive would potentially require a world government of some kind – and even then it could easily be argued that the control of resources and decisions of such a governing body would be biased and bent by the people or groups with the most power within that consortium (Dec 2023 update: I’ve since written about this in depth).

Moving deeper, we might state that any computer programmed by humans is liable to the same vices and errors as humans, or may be subtly adjusted during its construction in order to further the agendas of those who built it. It would be humans who would have to orient this machine in the moral landscape, already tainting its views and possible outcomes just by how it was constructed and what data it trained on.

We might, for instance, have a vast team of experts consistently check the work of other teams to make sure no secret initiatives are programmed into the machine. We might also assume there would be a rigorous testing regimen for ensuring the benevolence of the machine – though the depths of Friendly AI testing are concerns better fitted for MIRI or LessWrong than this blog.

In addition, putting any kind of “halt” on the technological development of other countries would be particularly difficult. In aiming to make a phronetic AI the first AI, we would need other countries to agree to this as an imperative, and to empower one another to halt the non-phronetic strong AI efforts of other nations. This, again, poses difficulty and conflict – and would probably require some degree of global governance, which almost certainly would be resisted by many nations.

Conclusion

I would certainly not be the first candidate for the construction of this super-intelligent machine, but over the last three months it has dawned on me that a responsible first organization or nation might bring the most aggregate benefit to the world by constructing an intelligence with this kind of phronetic objective and purpose.

I’m eager to invite arguments to the contrary – as I see any disagreement here as part of the phronetic and vigilant process of creating the best future we possibly can (even if it is laden with the inaccuracies and personal interests of humans like ourselves).