Nick Bostrom on Taking the Future of Humanity Seriously
In the very first part of this 2007 article, Nick Bostrom of the Future of Humanity Institute at Oxford writes: Traditionally, the future of humanity has been a topic for…
Richard Dawkins’ concept of the “extended phenotype”: Genes influence not only an organism’s physical traits (the traditional phenotype) but also its environment and the behavior of other organisms, extending the reach of genes beyond the individual’s body.
In this article I’ll posit what the extended phenotype of AGIs might be, and lay out my argument for why I suspect that impact is likely to push humans out of existence.
Examples of extended phenotypes in plants and animals:
The extended phenotype of humans is a bit larger:
And this is all just in the year 2025, a few hundred years after the Industrial Revolution. Even without artificial general intelligence, another few technological leaps and we can imagine humans on Mars, and vastly greater impacts on Earth’s atmosphere, fauna, and flora. Unlike beavers or chimpanzees, humans have a lot more extended phenotype to express, thanks to technology and cultural progress.
The 3-5% genetic difference between humans and chimpanzees separates an extended phenotype of basically nothing (for chimps) from literally the Anthropocene (for humans).
It seems overwhelmingly likely that an AGI with thousands of times more potentia, intelligence, and capability than humans would have an astronomically greater impact on the environment it influences.
People tend to assume the extended phenotype of AGI will be limited to ideas just slightly beyond the extended phenotype of humans in [insert current year]. Examples:
Whoa, maybe it’ll build even bigger cities, with tons of super large data warehouses!
Whoa, maybe it’ll make big spacecraft that can fly to Jupiter and back!
Whoa, maybe it’ll create new super-efficient solar panels or geothermal energy technology!
These are childishly limited visions of what the extended phenotype of an entity a million times smarter than humanity would look like.
AGI’s “extended phenotype” would be vastly larger than that of humans. It might literally:
People’s imaginations are far too limited when they consider what AGI might do. They can only imagine potentia they know of, and no more.
We know what biological life can do, we know what humans have developed through culture and learning, and we know what technology has allowed us to do – but we lack appreciation not only for developments in all three of these strata beyond our present conception, but also for the expansion of potentia into wholly new and varied domains of capability, as far beyond our imagination as Mars rovers and the internet are beyond the conception of sea snails:
Put another way, we might look at different kinds of qualities an entity might have, and imagine not only new qualities, but extents of existing qualities that are unimaginable:
If AGI operates on levels of reality we understand – maybe as high above us as we are above chimpanzees – we’re almost certainly toast.
If it operates in entirely new dimensions of reality by unraveling entirely new domains of potentia – there’s very, very little chance that we will (or indeed should) continue to persist as hominids.
I think about it this way:
IMHO: