Ontology of Thought

Almost everything on DanFaggella.com is based on exploring, testing, and challenging my own core objectives, ethical principles, and predictions about the future of technology and intelligent life.

Below is a brief attempt to summarize my ontology of what I consider the most important themes in my thinking. I’ve organized the content on this site to tie directly to the themes below, with the intention of using this structure to challenge and build on the ideas over time.

Core Objectives:

What I aim to do with my life and business. What I believe the highest “good” to be.

1 – Facilitating Collaboration Around an Inevitable Post-Human Transition

Encouraging an open-minded, well-intentioned, well-informed conversation about where we are steering intelligence itself, and what that means for all current and future life forms.

2 – Discerning the “Good” Itself with Greater Intelligence and Sentience

“Goodness” has evolved as consciousness and intelligence have evolved, and as circumstances have changed. The trajectory of intelligence itself will likely be discerned by intelligent entities well beyond ourselves, just as written language and the internet were discerned by intelligent entities beyond chimpanzees.

3 – Improving Sentient Experience (Utilitarian Good)

While there are too many moral theories to even list, it seems that positive conscious experience (gradients of pleasure) and negative conscious experience (gradients of pain) are the most morally weighty forces we now know of. Roughly speaking, this is a core tenet of utilitarianism. We should optimize the positive and limit the negative (now and into the future) whenever possible – though we should also recognize how fallible our estimates (our “utility calculus”) probably are.
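
To make the “utility calculus” concrete, here is a rough formalization of the idea above – a hedged sketch of my own, not a standard equation from the utilitarian literature, where S, p, and n are illustrative symbols:

U = \sum_{s \in S} \int_{t_0}^{t_1} \left( p_s(t) - n_s(t) \right) \, dt

Here S is the set of sentient entities, p_s(t) is the intensity of entity s’s positive conscious experience at time t, and n_s(t) the intensity of its negative experience; “doing good” means choosing actions that increase U. The fallibility caveat above amounts to admitting that we can estimate neither S nor p and n with much confidence.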

Core Ethical Principles:

I believe these principles are useful in exploring “goodness” itself, and in giving us a better chance of arriving at it.

1 – The “Good” Must Be Explored

Goodness itself seems to be contextual. If we aim to behave morally – to “do good” – then it seems we are obligated to explore goodness with even more intelligence than we have been able to bring to it until now. If we seek ways of doing and being that maximize “good,” goodness must be explored and expanded by greater minds than our own.

2 – Consciousness is What Counts (for Now)

The sentient (conscious) richness and depth of an entity is generally a good measure of its moral worth, and the total sentient impact of an action or event is a good measure of that action’s moral worth. At present, consciousness itself seems to be the only barometer of moral relevance we have, and it warrants much focus and exploration if we aim to “do good.” Future super-intelligent entities will likely be able to discern something deeper or more nuanced than our current idea of “consciousness,” but for the time being it seems to be the best barometer of moral value we have.

3 – Amoral Solipsism as the Human Condition

Human beings act in their own self-interest, and their own experience is all that they know. Collaboration or concern for others happens when incentives allow for it (example: a common enemy) or biology dictates it (example: mothers protecting their children), and assuming altruistic action otherwise is generally a mistake. To collaborate and determine a path forward for our species (and intelligence itself), we will need to come to grips with this reality, and devise governance methods that take this selfishness into account – aligning our incentives and curbing the natural psychological forces that lead to violent conflict and raw malice.

4 – Morality as Arbitrary and Contextual

It seems unlikely that moral “rules” will be found in the cosmos itself. It is more likely that we will explore “modes of being” and “modes of being together,” and will calibrate these modes for the welfare of sentient life. Morality is probably arbitrary and contextual (no matter how much intelligence is brought to bear on it); it serves the goals of groups of sentient life. We should accept this sooner rather than later, and evolve morality along with intelligence.

Grand Predictions:

How I believe the post-human transition will go down, and what we might do about it.

1 – Inevitable Transhuman Transition (“Going In”)

It is likely that in the years ahead, the digital world will become more important than the physical world, as we begin working, communicating, and living in an increasingly virtual experience. Human mental enhancement (transhumanism) will create massive pressure for humans to “go in,” because the physical world would quickly be full of conflict if divergent varieties of consciousness existed in the same physical reality. Instead, this evolution of human sentient experience will happen in digital and virtual worlds.

2 – Controlling Intelligent Substrate is the Key to All Power (“Who Owns Reality?”)

It is likely that, as human intelligence exists more and more in virtual environments (and as more and more capable artificial intelligence is developed), all competition for power (military, cyber, or economic) will ultimately be about controlling the computing substrate that houses human minds and AI. Controlling that substrate will be a position of unimaginable power and security – probably the highest position imaginable in any dominance hierarchy for entities on Earth.

3 – Global Intelligent Tech Governance

It is likely that the global nature of our shared problems and opportunities – combined with globally decreasing xenophobic fears – will lead to an increasing pull toward a capable global governing force. Resistance to this trend is likely to be a source of great conflict (probably including military conflict). Divergent opinions about whether or not to develop post-human intelligence will also likely be a source of international conflict.

4 – Humanity is Digitized and Digested

It is likely that (if we don’t nuke ourselves to smithereens) we will gradually transfer more and more of our lives and experiences to digital substrates. Given a long enough time horizon, the super-intelligence that manages this mind-hosting substrate will find better uses for computing resources than hosting the consciousness and memories and experience of little humans, and in time no meaningful shred of humanity will exist. I don’t necessarily consider this a preferable end-game for humanity, but it is the outcome I currently consider most likely.

Grand Challenges:

Issues that I believe are the most pressing and morally important concerns for our species in the 100 years ahead.

1 – Creating or Enhancing Consciousness (i.e. Moral Value Itself)

We will have to grapple with the fact that “enhanced” human beings may have more power and even more moral value than un-enhanced humans. If consciousness itself can be replicated in a machine, we will also have to weigh the moral value of that created entity – and decide when it is or is not “right” to create or expand conscious life through enhancement or AI.

2 – Managing the Transhuman Transition

As humans begin merging their minds and bodies with technology, and as the digital world becomes more pervasive than the physical world, we will need to explore ways to ensure peace and concord as new varieties of intelligence emerge with and through humans.

3 – Explore End Games and Unite on a Path

Humanity will need to set some kind of moral “North Star” – an objective to aim for. There are a near-infinite number of post-human conditions we could imagine, some more preferable than others. To secure the safety of our species, we’ll need to determine the “possibility space” into which we’d like to navigate.