Artificial General Intelligence – Finding the Good is Better Than Doing Good

The universe doesn’t seem to be inherently imbued with meaning, but anything we might do to find meaning, to make our time here “worthwhile”, seems to fall into one of two broad categories.

I argue that the creation of artificial general intelligence should likewise be undertaken in pursuit of these two goals:

1 – Doing the Good

  • In the most direct sense, this seems to imply utilitarian good – that is, creating wellbeing and/or preventing suffering in self-aware things. Ultimately “consciousness counts” (as I’ve argued in essentially all of my TEDx talks, especially my first one from 2014). While being virtuous or treating people fairly are certainly positive aims worth adhering to, the net sentient consequences of those actions (i.e. the impact on the happiness or suffering of all living things) seem the appropriate way to measure an action’s “impact”.
  • There are a number of challenges with “doing good”, including:
    • Measuring the sentient impact of an action (e.g. donating $100 to a charity, adopting a child, or choosing to ride one’s bike to work one day) is effectively impossible; the ripple effects of any single action cannot be fully traced.
    • Measuring happiness itself, in a given human or animal brain, is not an exact science. Even if we could get tiny sensors into the brains of all self-aware entities (from squid to dolphins to rodents to humans), we would still lack an agreed-upon way to translate that neural data into a measure of happiness or suffering.
    • We aren’t even sure which creatures are self-aware (it’s unclear whether some mollusks, some insects, or perhaps even plants qualify), and consciousness itself remains elusive.
    • Measuring the future impact on sentience is also impossible (e.g. what is the net total impact on all sentient life over the next 100 years from your decision to become a vegan?).
      • At best, we have proxies, theories, and our own understanding to rely on. The “utilitarian calculus” may be beyond our grasp, but we can try anyway, and many of us do try (a toy sketch of what such a calculus might look like follows this list).
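
To make the measurement problem concrete, here is a minimal toy sketch of what a naive “utilitarian calculus” might look like in code. It is my own illustration, not a real methodology: every entity, sentience probability, and wellbeing delta below is an invented placeholder, which is exactly why the calculus stays out of reach in practice.

# A toy illustration of a naive "utilitarian calculus" under uncertainty.
# Every number below is an invented placeholder, not a real estimate.
from dataclasses import dataclass

@dataclass
class AffectedEntity:
    name: str
    sentience_probability: float  # chance the entity is sentient at all (0..1)
    wellbeing_delta: float        # guessed change in its wellbeing from the action

def expected_impact(entities):
    """Sum each wellbeing change, weighted by the chance the entity is sentient."""
    return sum(e.sentience_probability * e.wellbeing_delta for e in entities)

# Hypothetical ripple effects of a single action (say, a $100 donation):
ripples = [
    AffectedEntity("direct human recipient", 1.0, +5.0),
    AffectedEntity("recipient's family", 1.0, +1.5),
    AffectedEntity("animals affected downstream", 0.9, -0.5),
    AffectedEntity("insects affected downstream", 0.1, -0.2),
]

print(f"Expected net impact: {expected_impact(ripples):+.2f}")

The sketch only restates the essay’s point: the real list of ripple effects is unbounded, and every number fed into such a model is a proxy rather than a measurement.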

2 – Finding the Good

“Finding the good” seems to fall again into two broad categories:

  1. Finding ways to better “do the good”, as outlined above. This could involve:
    • Gaining a better understanding of consciousness
    • Gaining a better grasp of happiness and suffering in self-aware entities
    • (More generally, improving our grasp of the utilitarian calculus in any way)
  2. Finding deeper, more robust versions of “the good” itself. This could involve:
    • Discerning new moral theories beyond those we now have (virtue ethics, utilitarianism, the categorical imperative, etc.)
      • I am not presuming that there is some kind of moral “truth” to be discovered, but rather, that there may be a deeper conception of what “doing good” actually is. I believe firmly that finding the good beyond our present conception of the good will require post-human intelligence, either through cognitive enhancement or the creation of AGI.

Finding the Good is the Highest Long-Term Aim of AGI

While “doing good” is the usual focus of artificial intelligence, it seems clear that “finding the good” is the higher long-term objective. I’m sure there’s much more beyond both concepts, but as a human being, pursuing what the good actually is seems more abstract, more challenging, and ultimately more important than executing on our feeble and limited notions of the good itself. I’ve written previously about the idea of a moral singularity, and I’ll repeat one of my main ideas here:

As more advanced and varied mental hardware and software comes about, entirely new vistas of ideas and experiences become available to creatures possessing those new mental resources. I will posit that these ideas are often entirely inaccessible to their predecessors. In other words:

  • Rodents aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that make up the many variations of chimpanzee morality
  • Chimpanzees aren’t capable of imagining, in their wildest dreams, the complex social rules and norms that make up the many variations of human morality

Very few people would disagree with the two statements above. It follows then, that:

  • Humans won’t be capable of imagining, in their wildest dreams, the near-infinite complexities of super-intelligent AI “morality”

To assume that the skull of Homo sapiens houses the highest conceivable moral ends would be immeasurably presumptuous. Morality has evolved with intelligence, and if we believe that our goals are higher, more informed, and “better” than those of rodents or chimpanzees, then we should also expect that the goals, moral principles, and ethical decision-making of future intelligences will be vastly beyond our own – and perhaps very well should be.

This is a dangerous idea because it doesn’t presume human preeminence, and I argue (in the moral singularity article linked above) that drastic changes in moral principles will make for an uncertain place for human beings as they are.

The long-term aim of artificial intelligence isn’t merely to pursue the moral aims of humanity; given a long enough time horizon, the goal is to explore goodness itself, and how morally worthy aims, in all their complexity, could be pursued and achieved. I would argue that “doing good”, even in the utilitarian sense, is still just the best Homo sapiens-level “grasp” of goodness, and will eventually be surpassed. Consider two thought experiments:

  • Imagine the absurdity of a world in 2018 where human beings pursue all of their goals with the moral frameworks established by chimpanzees.
  • Imagine the absurdity of a world in 3018 where superintelligent artificial intelligences pursue all of their goals with the moral frameworks established by humans.

We can understand the incompatibility in the first example because we know what human and chimp morality is like. We cannot imagine what superintelligent AI morality would be like, yet somehow we often have the gall to presume that our moral intuition (as disjointed and incoherent as it often is) is the guiding compass for the future trajectory of all sentience, not just humanity. A silly presumption.

By no means am I advocating an eagerness to obliterate humanity for the sake of producing artificial general intelligence, nor am I advocating an eagerness to forgo human moral intuition and hand over the reins of discerning the good to machines as soon as possible.

Rather, I’m making it clear that in the long term, should we produce entities with vastly more cognitive capacity, creativity, and wisdom than ourselves, we should presume that they would be able to help us with our greatest task: namely, determining what the hell we should be doing in the cold, mute universe (read: ethics).

I explored both “AI for doing the good” and the danger of “AI for finding the good” in my TEDx presentation for Cal Poly:


Header image credit: Singularity Hub