Setting a Direction for AGI Ethics

How should we influence the “values” or “ethics” or “morality” of a future AGI?

I’m not sure this kind of direction setting is possible, and it may be entirely futile, but it’s worth thinking about. I’d argue that there is no perfect answer to this question, but there are definite wrong answers. The values that people wish to instill into AI systems are merely a projection of the aim or goal they have for building AGI in the first place.

First, I’ll address our goals, then I’ll address the kind of values or ethics that AGI systems might be embedded with – and I’ll end on what I think is the best path forward.

The “Goal” of AGI

  • Probably Impossible, and Certainly Morally Wrong: 
    • Freezing the evolution of intelligence at human beings, never ascending to greater (in capability, understanding, sentient range) biological or technological entities.
    • Keeping all future technology merely as a tool for homo sapiens, never as a new start for the trajectory of intelligence itself.
  • As Good as We can Hope For:
    • Humans experiencing a super-blissful existence for as long as humans exist, and life itself blooming into vastly greater, higher forms than humanity – expanding into new kinds of sentient experience, and vastly more capability and understanding of the world. i.e. Blooming into vastly more moral value than humans themselves are capable of. (More context in the full essay: As Much as Humanity can Hope For).

I’m sure there are more nuances, but for the time being, I think the two major camps of AGI proponents are those who wish for either (a) eternal homo sapiens flourishing aided by AI, or (b) to allow the trajectory of intelligence to continue well beyond humanity, and I’m in the latter camp (that of the Cosmists, not the Terrans, to use Hugo de Garis’s terminology from The Artilect War).

Assuming we’re on the same page about “the goal” above (but even if we’re not), our options for influencing the ethics or values of future intelligence should also be rather clear.

The “Ethics” of AGI

  • Probably Impossible and Certainly Morally Wrong:
    • Imposing a strict, human-centric value structure on future post-human intelligence – either making humanity the peak of all moral value and/or using human-level concepts and constructs as an eternal “bounding box” for AI.
  • Unnecessarily Risky and Reckless:
    • Simply “letting it rip” – allowing an AGI system – built for whatever initial purpose (military, media, whatever) – to expand into whatever form and whatever set of values it happens to have.
      • ^ One value I hope we can all agree on is that the end of life itself (i.e. snuffing out any chance of great post-human intelligence from ever occurring) would be the worst imaginable outcome. I suspect all but a few negative utilitarians agree with me on this.
  • As Good as We can Hope For:
    • Set some kind of general moral tenets for an early post-human AI to move towards… and then “let her rip” and explore vastly higher values (values more wonderful and expansive than human values, as human values are more wonderful and expansive than the values of rodents or beetles).

Ben Goertzel’s Cosmist Manifesto is the best example of this that I know of. He frankly addresses the fact that determining the long-term moral tenets of post-human intelligence is both impossible and wrong, and states that we should broadly aim for a system that maximizes three values: Joy, Growth, and Freedom (Choice).

Goertzel goes so far as to state that while he wants to see an AGI that values all the same things he does (nature, his own children, etc.), he also accepts that a vastly post-human intelligence will value things in ways he can’t predict – but hopefully in ways that are richer and more nuanced (in the same way that our own human values are richer and more nuanced than the values of a field mouse).

This is exactly the kind of honesty required to speak about these matters. It’s soothing to think that we can freeze the spires of form and remain homo sapiens kings and queens of the universe, but if we’re being honest – we are but one attenuating pattern in the universe, and the best win we could hope for would be to help a higher pattern emerge.

It goes without saying that how such terms are defined and “optimized for” is everything, but that’ll have to be another blog post for another time.

Can we Set a “Direction” for AGI Ethics?

My own bias comes from my own desired end goal: To see intelligence and sentience itself bloom into astronomically higher forms, mostly unencumbered by the fetters of human values and ideas. Any recommendations that I have for ethical direction-setting for an AGI system are, unabashedly, about moving towards that aim.

I’m not even certain if we can influence it – but I think we might as well try (much longer argument for trying in: State of Nature, or Solidarity? The Birth of Artificial General Intelligence).

Having such a set of directional values may not even be possible – but I’d still argue that attempting to set a direction (or set of guiding principles) is probably (a) less likely to lead to the end of life/value (Armageddon), and (b) less likely to lead to unnecessary suffering.

Recently I had the pleasure of connecting again with David Wood, Chair of London Futurists, for a discussion about AGI and ethics – predicated initially on some of my essays on the topic. We were joined on a panel by my friend Bronwyn Williams, Foresight Lead at Flux Trends, and Rohit Talwar, CEO of Fast Future.

Here’s the resulting discussion (my own presentation runs from 4:40 to 16:40), which touches on all the issues addressed in this article:

David’s approach to unpacking issues around the future is admirably open-minded and multi-faceted. I believe he’s doing important work in not only putting various ideas on the table, but distilling important distinctions to help navigate the future. I follow David on Twitter – and if you enjoyed this short article, you might, too.


Header image credit: Fine Art America