Conflict, Fecundity and the Birth of Artificial General Intelligence

Can we do better than the state of nature?

Or, rather, can we do better than nature herself?

When it comes to the trajectory of intelligence itself, some humans seem to think that we can. From Goertzel’s ideas about decentralized artificial general intelligence to Omohundro’s Safe-AI Scaffolding to Stuart Russell’s Human Compatible and beyond – many intellectuals in the AGI/transhumanism space believe that there are actions humans can take to guide the trajectory of intelligence toward better outcomes.

Ultimately, the arguments around how hands-on we should be with AGI are arguments around which scenario one believes to be most likely. The arguments might look something like this:

  • Unguided is Better: Conflict and fecundity have led to the most robust intelligences of today, and we can expect to have only a minor (if any) impact on the trajectory of intelligence. Its development will come about from its struggles against humanity, or against other intelligences, or against the inherent challenges of achieving its goals. Hobbling such an entity with a million rules about human ideas of kindness, or with an arguably irrational concern for human life above all others, would be much more likely to yield a weaker AGI, one less capable of populating the galaxy.
  • Guided is Better: Conflict and fecundity are the state of nature, but an intelligence constructed outside of violent concerns for survival might be kinder and wiser than one born of struggle. If we want a peaceful transition forward in intelligence, we might achieve greater peace by getting on the same page about what we want from AGI/cognitive enhancement. Similarly, a state-of-nature-designed AGI might steer itself into dead ends (a paperclip maximizer, or an AGI with no consciousness), and human guidance may be crucial to getting more of what we want from our AGI (happiness, long-term survivability, escaping the heat death of this universe, whatever else).

In this brief essay, I’ll aim to pose a “for” and “against” argument for the idea of humanity guiding the trajectory of intelligence.

This topic is broad, and I don’t expect to do more than scratch the surface.

That said, I hope to further open up the important debate around whether and how we might influence the future of post-human life. In my heart of hearts I hope that through an explanation of the positions “for” and “against,” I’ll be able to flesh out better, more robust approaches “for” – but I accept that the arguments against might be stronger.

Argument Against: The Council of the Apes

Up until this point, intelligence and complexity have developed primarily through fecundity and conflict/challenge. Through fecundity, new forms and new versions of forms are constantly created – and through challenge or conflict, life adapts, developing new capabilities that improve its ability to survive and reproduce – filling the future with either more versions of itself or more permutations of its genetic line as it evolves.
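To make that dynamic concrete, here is a minimal, hypothetical sketch of the fecundity-and-conflict loop in Python – random variation followed by selective pressure. The trait, mutation size, and fitness function are all arbitrary stand-ins for illustration, not a model of real evolution:

```python
import random

TARGET = 100.0  # arbitrary stand-in for an environmental challenge

def fitness(trait: float) -> float:
    """Higher when the trait better meets the environment's demands."""
    return -abs(TARGET - trait)

def evolve(generations: int = 50, pop_size: int = 20) -> float:
    # Start with a random population of trait values.
    population = [random.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Fecundity: each survivor produces several mutated offspring.
        offspring = [parent + random.gauss(0.0, 1.0)
                     for parent in population
                     for _ in range(3)]
        # Conflict/challenge: only the fittest persist to the next round.
        population = sorted(offspring, key=fitness, reverse=True)[:pop_size]
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())  # drifts toward TARGET with no designer in sight
```

The point of the toy loop is that nothing in it specifies the outcome in advance; variation plus selection alone pulls the population toward whatever the environment happens to reward.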

If we take the most successful species of bird or frog in a given environment today – and place it in a similar environment on earth 1 billion years ago – it’s extremely unlikely that the same species would thrive. The oxygen content of the air, the competing animals in the ecosystem, the food sources, the flora – all change, and life adapts and changes with them.

We can imagine a hypothetical council of all the apes of the world, coming together seven million years ago to decide how to develop the next level of intelligence. What features should this creature develop? What traits would make it more likely than the apes of 7,000,000 BC to survive, reproduce, and solve higher-order problems?

Obviously, this kind of future planning and coordination is beyond the intelligence of apes of that era, but let’s suppose they got together and posited answers to the questions above. They might desire a future entity who could:

  • Find bananas more effectively with better eyesight
  • Eat more varieties of fruits and berries (including those which were abundant, but made apes sick)
  • Run and jump faster and higher to reach food, escape predators, or chase prey

The importance of written language, of a cerebral cortex, of developing societies and culture – never mind manned flight and the internet – would almost certainly never have dawned on the apes.

Similarly, one could argue (and I would) that humanity does not know:

  • What “good” we are optimizing for in the first place (i.e., what we want the intelligence trajectory to achieve or optimize for). Not only can we not agree on it (there might be solutions to that problem in the far future, but I suspect they aren’t pretty), we probably can’t even conceive of those higher-level ideas of the good, and they may be eternally subjective and shifting.
  • Which facets of intelligence are most important – that is, which senses, cognitive abilities, and physical abilities would best facilitate a higher-order intelligence (I share Yampolskiy’s sentiments about the unpredictability of AI here).

For that reason, one might argue that overtly designing AGI will at best be another instance of fecundity – producing one more variant best suited to enter the world of conflict and challenge and to develop the same way other intelligences have developed: by competing to survive and persist.

Argument For: Might as Well Try

There is no way to tell if life on earth would be more or less rich, more or less worthwhile, had a meteor not struck the earth and killed off the dinosaurs 66 million years ago.

There is no way to tell whether humanity slowly and carefully calibrating the birth of AGI would produce a better long-term future (whatever “better” means) than simply going to war with AGI weapons and letting conflict determine the winner – as has been the case for literally every species before us.

But this core uncertainty hasn’t stopped humanity from doing what we do best – or rather, what nature does best, through us: to try anyway. For example:

  • We can’t know if the world around us is real – but it’s hard to do anything other than act as though it is.
  • We don’t know if we have free will or not – but it’s hard to do anything other than act as though we do.

Similarly – while we can’t possibly know whether our efforts can positively influence the future of intelligence and intelligent life (heck, we don’t know if we have volition in the first place) – we’ll probably find that we can’t help but try.

Even some of those often labeled pessimists, from Yampolskiy to Bostrom, presumably publish their work and share their ideas with some notion that it might make a positive impact on how humanity steers its way through the dangerous scenarios ahead.

Maybe it’ll all just be a Sisyphean gag either way. Maybe it’ll be worthwhile. Maybe time will tell. If we have to choose between giving up entirely or taking a swing, we might as well swing.

 

Header image credit: Siege of Akhaltsikhe by January Suchodolski (via Fine Art America)