Nick Bostrom on Taking the Future of Humanity Seriously

In the very first part of this 2007 article, Nick Bostrom of the Future of Humanity Institute at Oxford writes:

Traditionally, the future of humanity has been a topic for theology.  All the major religions have teachings about the ultimate destiny of humanity or the end of the world. Eschatological [a branch of theology concerned with the final events in the history of the world or of humankind] themes have also been explored by big-name philosophers such as Hegel, Kant, and Marx.  In more recent times the literary genre of science fiction has continued the tradition.  Very often, the future has served as a projection screen for our hopes and fears; or as a stage setting for dramatic entertainment, morality tales, or satire of tendencies in contemporary society; or as a banner for ideological mobilization.  It is relatively rare for humanity’s future to be taken seriously as a subject matter on which it is important to try to have factually correct beliefs.

Looking at it from this point of view, it does, at least in some way, seem odd that given the speed of technological development, the future of humanity is still treated primarily as a topic for science fiction rather than a necessary focus of the world’s leaders.

What I’ve often found is that many of the seemingly distant notions of what “could be” are regarded much the way walking on the moon would have been regarded in 1920: absurd and essentially impossible. But we done up and walked that bad boy – a WHILE ago – and despite the speed of progress today, our general bias seems to be that present conditions will endure and that tomorrow’s possibilities will not extend too far beyond those of today. I’ve mentioned in one of my recent TEDx talks that today seems to be the worst time to intellectually hide under the rock of “it hasn’t happened yet, so it never will.”

In his article, Nick refers to four different “types” of futures of humanity:

  • Extinction
  • Recurrent collapse (Building up to a specific “level” of technological development before systems fail on a massive scale and we must begin almost all over again)
  • Plateau (Where essentially the speed of progress is halted by human resistance or by seriously complex issues which prevent posthuman development – which Bostrom poses as very unlikely)
  • Posthumanity (Defined below)

An explication of what has been referred to as “posthuman condition” is overdue.  In this paper, the term is used to refer to a condition which has at least one of the following characteristics:

  • Population greater than 1 trillion persons
  • Life expectancy greater than 500 years
  • Large fraction of the population has cognitive capacities more than two standard deviations above the current human maximum
  • Near-complete control over the sensory input, for the majority of people for most of the time (similar to my idea of the “Epitome of Freedom“)
  • Human psychological suffering becoming a rare occurrence
  • Any change of magnitude or profundity comparable to that of one of the above

This definition’s vagueness and arbitrariness may perhaps be excused on grounds that the rest of this paper is at least equally schematic.  In contrast to some other explications of “posthumanity”, the one above does not require direct modification of human nature.

Looking at a potential definition of posthumanity, it is interesting to see a specific set of criteria in place that don’t necessarily involve us becoming a single glob of computational substrate – although it certainly doesn’t exclude that possibility. The individual criteria themselves bring to bear the question of how we’ll make the transition to posthumanity.

I think of the effective (by this I mean, beneficial to sentient beings) transition to posthumanity as similar to the attainment of any goal – in that it likely involves the elements essential to attaining any goal, two of which are:

  • A higher but distinct vision (sometimes concrete, sometimes less so) of the objective of the goal-attainment process
  • Specific steps identified for its attainment, often involving “phases” in addition to individual steps

In my opinion, seriously considering the highest vision of posthumanity, and the particular “phases” used to attain it, will be tremendously useful – and maybe the most morally worthy philosophical investigation of all time.

I believe it is important not only to envision the phases we’ll use to transcend our current human condition, but also to continually refine the VISION that this might ultimately be heading towards.

The vision might be anchored in a core Utilitarian or Epicurean belief system, in which case the future might look like an infinite number of sentient, conscious computers experiencing the highest levels of pleasure imaginable (beyond human senses or imagination), with an additional system in place to enhance and replicate all of these computers.

Or, the vision might end up anchored in the Transcendental (aptly named) vision, or in some notion of “Bildung” – with the purpose of the superintelligence being a continuous and urgent expansion beyond its current capacities – becoming infinitely larger, more versatile, and more connected.

Though the “vision” of the future of humanity will almost surely surpass any current human philosophy in its depth and complexity, it’s important for us to note that our original approach to “where we take this thing” will almost undoubtedly have massive consequences for where “this thing” ends up.