Future of Humanity Institute – Dr. Stuart Armstrong

Over the course of my studies in transhumanism (particularly the ethical issues and opportunities in this domain), it’s never long before I run into the Future of Humanity Institute at Oxford yet again. Below is a bit of the smorgasbord of topics we covered in our chat about intelligence, philosophy, and man’s attempts at foreseeing the future:

Anthropic Probability

Anthropic probability is a topic of particular interest for Dr. Armstrong, and one I’m surprised I don’t hear discussed more often.

The idea is that there are certain things which could not really be otherwise, since we exist to perceive them. In some ways they can come across as surprising, when in fact they MUST be the case for us to be here to consider them at all.

Dr. Armstrong uses the example of chimpanzees being intelligent. Is it such a surprise that chimpanzees are so intelligent? No, because they come from our own line. In order for us to marvel at their intelligence, they must have been intelligent enough for us to spring forth from them.

Similarly, it has been noted that meteors of lesser and lesser severity have hit Earth over time. We could marvel at how odd that is, but if it were literally ANY other way, we wouldn’t exist as a species to marvel at the fact. The same can be said of Earth supporting life or being at a perfect distance from the sun (i.e. “Well YEAH… we EXIST, don’t we?”).
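To make the selection effect concrete, here is a minimal toy simulation of my own (not a model from Dr. Armstrong; the epoch count and per-epoch impact probability are made-up numbers). Planets with a sterilizing impact anywhere in their history never produce observers, so every observer looks back on a record that seems improbably calm:

```python
import random

random.seed(42)

EPOCHS = 10          # coarse slices of a planet's history (made-up number)
P_BIG_IMPACT = 0.2   # assumed per-epoch chance of a sterilizing impact
TRIALS = 100_000

survivors = 0
for _ in range(TRIALS):
    # A planet yields observers only if NO epoch has a sterilizing impact.
    if all(random.random() >= P_BIG_IMPACT for _ in range(EPOCHS)):
        survivors += 1

print(f"Chance any single epoch is calm: {1 - P_BIG_IMPACT:.0%}")
print(f"Planets with a fully calm record: {survivors / TRIALS:.2%}")
print("Yet 100% of observers (who exist only on surviving planets) "
      "look back on a fully calm record.")
```

Only about one planet in ten survives the full run, yet every single observer sees a benign history. Nothing about the calm record should surprise them.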

From my perspective, this line of very rational (though often unrecognized) thinking leads us to a perhaps more accurate view of our world as a whole and our place in it. What other worldly occurrences could be no other way, given that we are here to reflect on them?

Intelligence and Predictions

One of the most enthralling statements made during our talk had to do with strong AI’s potential threat to humanity. Dr. Armstrong pointed out that with almost any other humanity “Doomsday” scenario – such as famine, lack of water, nuclear winter, or plague – it would probably be very hard to kill off the entirety of our species.

We’re persistent and resourceful, and despite any of the above situations, we’d most likely keep a large enough number of us around to continue the species. If a nuclear winter knocked out 90% of humanity, the rest would probably be able to hang in there and make a comeback.

However, if some form of artificial intelligence were to knock out 90% of the population, the rest of us are probably goners.

Dr. Armstrong points out that predicting when an AI that powerful could even exist is a very risky assessment. Even amongst the foremost experts, he notes, there is tremendous variability in when such an intelligence might be brought into existence. Dr. Armstrong himself states: “I have 80% confidence that it will emerge between 5 and 100 years.” In this respect, he differs from Kurzweil and other “optimists” who believe in a more proximal inevitability of such an intelligence.
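To get a feel for just how wide that interval is, here is a rough back-of-the-envelope sketch of my own (the lognormal shape is purely my assumption, not Dr. Armstrong’s method): treat 5 and 100 years as the 10th and 90th percentiles of a lognormal distribution and solve for its parameters.

```python
import math

low, high = 5.0, 100.0   # years: the quoted 80% interval
z90 = 1.2816             # standard normal z-score at the 90th percentile

# Fit a lognormal so that P(low < T < high) = 80%, symmetric in log-space.
mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * z90)

print(f"Implied median arrival time: ~{math.exp(mu):.0f} years")  # ~22 years
print(f"Implied log-scale spread (sigma): {sigma:.2f}")           # ~1.17
# Sigma above 1 on a log scale means the forecast spans more than an
# order of magnitude -- the "tremendous variability" he describes.
```

Under those (assumed) numbers, the median lands around 22 years out, but with an uncertainty spanning more than an order of magnitude. That is a very different posture from a confident near-term date.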

Dr. Armstrong himself has been appalled by the quality of AI predictions, believing (rightfully so) that dates and numbers appeal to us. The idea of Moore’s Law (the observation that the number of transistors on an integrated circuit approximately doubles every two years) determining the emergence of AI is – in his eyes – a false association.
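For reference, Moore’s Law itself is just a doubling rule, sketched below (the 1971 starting point of 2,300 transistors is the Intel 4004; the extrapolation is purely illustrative). Notice that nothing in the rule says anything about intelligence:

```python
def moores_law(start_count: float, start_year: int, year: int,
               doubling_years: float = 2.0) -> float:
    """Extrapolate transistor count assuming a fixed doubling period."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

# Starting from the Intel 4004 (2,300 transistors, 1971):
for year in (1971, 1991, 2011, 2031):
    print(f"{year}: ~{moores_law(2_300, 1971, year):,.0f} transistors")

# The rule predicts hardware capacity, not when (or whether) that capacity
# becomes a general intelligence -- which is the association Dr. Armstrong
# rejects.
```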

In many regards, he believes the transition to posthumanism will be less exciting than many people presume, and that the more “vivid” images of what this future will look like are probably the more wrong ones.

All in all, Dr. Armstrong sees most trends in technology as positive (making us ultimately healthier, wealthier, and more peaceful), and believes that as a species we’re doing alright and likely will continue to do so unless we get slammed with something from left field (an asteroid, etc.). He doesn’t believe that many “threatening” conditions – like global warming – will come close to ending humankind.

The Field Is Carried Forward by Being Less Wrong

I wrapped up by asking Dr. Armstrong how he thought these very important topics of transhumanism and intelligence ought to be carried forward for greatest benefit. He stated that, mainly, we need to be aiming for truth, not arguing for a moral point. The latter is false, disingenuous, and will not move these fields forward in a way that benefits us.

I’m of an identical belief in our being objective, rational, and open on these matters of transhumanism and intelligence. Those tend to be the conditions of positive progress – why not have them in place for issues as important as these?

Food for thought as always,

-Daniel Faggella

– – –

I want to extend an extra thank you to Dr. Stuart Armstrong of the FHI for taking the time to catch up and share his ideas. You can learn more about Dr. Armstrong here at his FHI page.