Potentia and Potestas: Achieving The Goldilocks Zone of AGI Governance

Some AI thinkers and funders believe that any governance is a net negative for innovation, and signals dictatorship or oligarchy.

Arguments for this position include:

  • Governance would prevent economic / health / other benefits from AI
  • AI governance would require control of and transparency into compute, which would demand much more intense and potentially tyrannical state oversight
  • It would be great if we could coordinate, but the West can’t trust China to cooperate at all

In this article I’ll argue that for AGI to “go well” for humanity or for posthuman life, some governance is almost certainly necessary, but that this governance should increase the collective power and choice of those it governs, rather than decreasing it.

We’ll examine the pursuit of this power- and choice-increasing “Goldilocks zone” of governance in domains outside of AGI, and lay out some possibilities for humanity going forward.

The Goldilocks Zone of Governance – Balancing Power (Potentia) with Control (Potestas)

To explore the Goldilocks zone we’re discussing, let’s first lay out two terms from Spinoza:

  • Potentia refers to power as capacity, ability, or active force. It is the inherent power of something to act, exist, or express itself. Potentia is closely related to conatus, the striving or effort by which each thing seeks to persist. For individuals or groups, potentia is the power to affect and be affected, enabling them to flourish and realize their potential.
  • Potestas refers to institutionalized or externalized power, authority, or domination. It is power as command or control (associated with laws, rulers, or societal structures in Spinoza’s work).

Potentia is a kind of primary inner force of expansion – the drive for living organisms or organizations to extend the set of powers that will allow them to get what they want, and to stay alive. Potestas is the outer binding forces of power that create structure and limitations around the inner-expanding potentia.

Here’s a breakdown of these two forces:

Spinoza himself lived during a time of oppressive monarchies and an arguably even more oppressive church. He despised potestas as a means of enforcing beliefs, or exclusively serving rulers at the expense of the collective.

But he very much valued laws that protected freedoms, prevented discord, and – in general – aggregately increased the power and choice of those under those laws.

Because our primary focus here is human coordination, I’ll use a few examples of current “Goldilocks zones” of coordination and governance – including a hypothetical Goldilocks zone for AGI governance:

While we should absolutely be wary of tyranny drummed up through fear-mongering, AGI is so large and complex an international issue, and so clearly an existential threat to earth-life, that limiting our coordination to nothing more than the international state of nature (a brute arms race of economic and military capabilities) would be ridiculous.

As in other domains, a Goldilocks zone exists for AGI law.

The wonderful free market has lifted huge swaths of the world out of poverty and elevated human creativity and quality of life drastically.

But the free market doesn’t handle everything (as evidenced by the graphic above), and it seems easy to argue that what the free market misses is especially dangerous in the context of AGI:

What the “No Governance” Position Requires

Black-or-white positions of “basically tyranny” or “100% hands off” obviously don’t make sense.

In order to advocate for absolutely no additional coordination or international law around AGI (when it is patently obvious that we need such coordination for speed limits, policing, financial crime, etc), one would have to believe either:

  • AGI won’t ever happen (or is 100+ years away), or
  • AGI wouldn’t be all that disruptive. Self-driving cars, a better Siri, no big deal. It would remain a tool, without agency or ability to act beyond minor initial prompts from humans

Fewer and fewer people believe that AGI won’t ever exist (famously, Bengio, Hinton and other titans of AI research became advocates for AI safety only after ChatGPT).

Many in the staunchly “it’ll never happen” camp have had to change their tune to the “it’ll just be a tool” position. Perhaps today that’s where AI stands, but AI doesn’t “stand” for very long – as the last decade of progress has clearly demonstrated.

And even if AGI takes off in a military context, the thinking goes, it’ll just maintain a kind of mostly-peaceful stasis between the great powers – it’s not like it would kill us all or anything. This argument essentially equates AGI (a technology vastly more capable than humans) with toaster ovens and automobiles: whatever existing regulation holds up for that tech will hold up for AGI.

This represents a profound misunderstanding of the kind of thing we’re potentially conjuring here. 

If AGI doesn’t have agency and unbearably post-human capabilities in year 1, it will in year 2 or 3. Our timelines are short as hell to having something we (a) don’t understand and (b) obviously can’t control or make “go well.”

Many separate articles must be written to counter these “AGI will be alright anyway” arguments, but here are a few reference links:

  • Hinton explains his rationale for seeing AGI as an imminent threat to humanity, why he suspects it’ll have agency, and why its concerns may lie very much outside of what humans care about.
  • Connor Leahy makes a concise and compelling argument for why even pre-AGI systems are likely to be agentic and operate without much human oversight.
  • In Against Inevitable Machine Benevolence I argue against many common AGI copes, including the “it’ll treat us as pets,” and the “it’ll be grateful to us as we are to our own parents” scenarios.

But even if you somehow do think that AGI won’t be that disruptive, it would still make sense, just in case Hinton or Bengio are right (even if they’re off by 15 years), to at least have plans in place so that if AGI did clearly become a net negative for earth-life, we could reel it in and ensure we don’t do anything stupid.

This takes us to what we should be doing in the near term:

Near-Term Requirement for Potestas to Achieve a Beneficial AGI

“AGI safety” isn’t the right vibe for everyone. 

For some people, “safety” implies a kind of government nannying that is overtly wrong in almost all cases. “Don’t touch my guns, and don’t touch my GPUs!”

Yet even with guns we have rational laws for their licensing, registration, and use.

Some people think the solution to everything is government nannying. “We don’t need to worry about it, just pass the responsibility to the government!”

What both the “nanny” fans and the hardcore conservatives have in common is: 

Wanting AI and AGI to “go well.”

There’s a lot of agreement on “go well.” Here’s a handful of things most of us would agree on:

  • An increase in the freedoms and powers of individual citizens.
  • Survival of human beings (not being immediately killed off or displaced by AGI)

We disagree on how risky AGI is in the near term, and on what degree of governance is warranted for which specific kinds of risks. But at least we all want AGI to “go well,” and we can share a table of the crux issues that separate us and work on them.

Potestas done well gives humans more freedom, more benefits, more power – in addition to reducing useless risks.

Here’s a look at how potestas done right actually opens up more possibilities for freedom and power (potentia):

A Local Example of Potestas:

If there were no laws against theft and murder in Massachusetts, I would be forced to spend most of my time stockpiling ammunition and food – and looking out my window with a rifle. Growing a business, writing philosophical papers, etc would be out of the question.

Thankfully, human beings in Massachusetts have agreed to give up their powers to steal and kill (potentia) and submit to a larger overall power (potestas) in order to increase the aggregate capability and flourishing (potentia) of the collective (the people of Massachusetts). More freedom, more innovation, more opportunities to make science, business, and the human condition “go well.”

(We could use rules of the road as another example, or the FDA example used in the image above. These are all examples of places where net potentia increases by subjecting ourselves to a bit of selective potestas.)

AGI (International) Example of Potestas:

If there were no laws in the global AGI race, nations and labs would have to compete purely on who could build the most economic or military advantage as quickly as possible (i.e. the current state of affairs). Working on understanding how AGI would treat humanity, or if it is conscious, or if it has agency, or what it wants to do with the universe… all of these concerns would have to fall by the wayside (again the free market doesn’t help with those).

Unfortunately, that’s exactly where we are in the AGI race: total runaway potentia, headed towards conflict on many fronts, while conjuring something we don’t at all understand.

A total lack of even a modest amount of international potestas. We could potentially align on what preferable and non-preferable traits might be for the sand gods we’re conjuring, but no – we’re stuck stockpiling ammo and food, and recklessly birthing a posthuman thing.

Solving this issue would involve some kind of serious international AGI governance scheme, especially between the USA and China. Nothing about this is easy, but hopefully we can make our way there without a disaster.

The Long-Term Requirement to Accept Human Attenuation

This article was not intended to lay out a specific playbook for AGI international policy. People smarter than myself will build viable options there (I like CIGI’s model as a good start).

This article was intended to drive home two modest and reasonable points:

  • Having no governance whatsoever over a technology as powerful as AGI is ridiculous and reckless – some potestas is necessary.
  • There is no way to “freeze” the current world order, or even the human species as it is, via governance – and we ought to strive to influence the trajectory while embracing the change that comes along with it.

A dynamic balance (rather than a black-or-white orthodoxy) is certainly what we need at the precipice of the singularity.

Between immersive AI worlds, AI’s impact on our economy (Gradual Disempowerment) and social fabric, brain-computer interfaces, and other impending changes – the balance of potestas and potentia is certainly leading somewhere beyond humanity.

Nature has always attenuated its specific forms (species, individuals, etc.), and we just happened to be born at a time when this process has accelerated so quickly that it may reach our own form within our own lifetimes.

There is no way to “govern” the world to stay the same forever. 

There are many, many more forces pushing us towards the ascension of humanity than pulling us towards preservation of the human form. The best we can do is bend the trajectory of events towards preferable futures, and away from un-preferable ones:

Spinoza himself believed that, ultimately, potentia is more fundamental than potestas, and that expansion and expression will and should continue beyond any kind of control. 

We are turning into something else, and the questions are:

  • What are we turning into / should we turn into?
  • How can we facilitate that transformation while avoiding horrible outcomes for the trajectory of life?

Potentia is already expanding beyond humanity as it is – and potestas may help to guide and extend these powers in the right way towards the only one of the four long-term outcomes for humanity that is a net win for life itself.

Fingers crossed we’ll strike the right balance along the way. We might as well try.