Over the past 13 years I developed a specific set of moral beliefs that felt like a kluge of what I was reading (Emerson, Bostrom, Al-Rodhan, others).
For a full decade I probably only met four or five people who intuitively understood my position and shared similar beliefs. Then, in interviewing Bostrom again recently, I realized that even he didn’t seem to be championing the belief system I once thought he championed – the one I believed to be important.
But in the last two years, as I published more vigorously about these ideas (especially after publishing Worthy Successor), I’ve met – online and offline – dozens of people who share some of my core moral beliefs, many of whom have added luster and perspective to these moral ideas and have inspired dozens of my essays.
But there was no name for this state-space of moral perspectives.
I realized that the ideas being explored were less about one specific interpretation of moral beliefs that I held, and more about a broader set of moral beliefs that share some important commonalities about exploring and expanding value itself – beyond humanity or even consciousness as we know it.
So, voila, a new term is born.
“Axiological” as in the study of value itself (beyond humans, beyond any particular species or substrate), and “cosmism” in the sense of applying to the entirety of existence (the multiverse across all time, completely beyond earth or humanity or the meagre planes of existence that humans can detect or understand).
A formal definition might look something like:
Axiological cosmism (noun)
I was initially leaning in the direction of potentism (for Spinoza’s potentia), but was talked out of it because (a) my smart friends convinced me it was weird to pronounce and didn’t seem philosophically rigorous, and (b) I realized that potentia is especially crucial in my interpretation and perspective on axiological cosmism (AC), but it isn’t necessarily core to the entire moral “space” that AC represents.
There is no single version of axiological cosmism. Thinkers inclined toward it fundamentally agree on a few core tenets, but their interpretations and preferences may differ on any number of factors or features of the philosophy and its application.
Examples of different thinkers with specific preferences and interpretations of axiological cosmism:
Daniel Faggella’s AC ideas have a specific flavor, generally:
Michael Johnson’s ideas tend to have a different emphasis:
While the “flavors” of these interpretations differ, they are all still recognizably positions within the broader category of Axiological Cosmism, and they reflect the world-models, underlying beliefs, and focus areas of the respective thinkers involved.
While different AC thinkers might have different preferences and takes, there are many common moral beliefs related to AGI or posthumanism that simply don’t fit within axiological cosmism, including:
Here are a few visual representations of how Axiological Cosmism differs from other current philosophical and moral positions.
Here’s AC compared with relativism and utilitarianism across a number of moral criteria:
Axiological Cosmism, Utilitarianism, Relativism – Compared

| Stratum | Relativism | Utilitarianism | Axiological Cosmism |
|---|---|---|---|
| Core Moral Principle | Morality is dependent on cultural norms or individual perspectives, not universal principles. | Maximize happiness/utility for the greatest number of people or sentient beings. | Expansion of useful capacities (consciousness, and all possible powers that permit something to persist). Focused on unpacking new value. |
| Moral Priority | Varying by culture or individual; no absolute universal priorities. | Maximizing happiness/utility in the present moment (typically for the greatest number). | Expanding consciousness and potentia – maximally ensuring survivability and exploring all value, even beyond sentience. |
| Goal of Moral Action | Maintain coherence with social norms or personal standards. | Optimize outcomes for the greatest happiness and utility, typically balancing pain and pleasure. | Expand sentient minds and their capacity to unfold more value and power (Potentia), and don’t let life itself go extinct. |
| Potential Motto | “Right and wrong are matters of perspective.” | “The greatest happiness for the greatest number.” | “Expand the flame of consciousness and potentia, and ensure the flame doesn’t go out.” |
Here’s a visual representation of some of the “crux” issues where AC specifically diverges from relativism and utilitarianism:
Axiological Cosmism, Utilitarianism, Relativism – Differences and Similarities

| Criteria | Relativism | Utilitarianism | Axiological Cosmism |
|---|---|---|---|
| Is Value Independent of Any One Species (Humanity)? | Yes | Yes | Yes |
| Is Consciousness (Especially Positive Qualia) Valuable? | Not inherently, no. | Yes | Yes |
| Is There More Value to Be Unlocked That Humans Have Yet to Discover? | No, it’s always relative, forever. | No, only qualia matters. | Yes, there’s likely more to unlock. |
| Is This Moral Theory Intended to Be Overcome / Surpassed? | No, it’s always relative, forever. | No, only qualia matters. | Yes, new, better theories may arise as potentia expands. |
I might have listed other moral philosophies here (Kant’s categorical imperative, virtue ethics, etc.), but those seem less common in current AGI and posthuman futures discussions, so I chose relativism and utilitarianism instead.
If a term for axiological cosmism had already existed, I would simply have used it. But it didn’t.
The purpose of laying out this term is to give a name to an identifiable constellation of meta-ethical stances that share some important traits (laid out above).
I have my own “flavor” of AC and my own areas of focus, and my fellow AC-aligned thinkers have theirs. The aim isn’t to create a specific narrow lens where everyone agrees – but to mark out a place on the moral map where like minds can explore what I (and, fortunately, many others) think are the most important questions.
At the dawn of artificial general intelligence, it behooves us to consider the massive scope and scale of the moral considerations of minds vastly beyond our own, and to look squarely at the situation we find ourselves in:
I’ve written extensively for years about these ideas (Worthy Successor, Blooming vs Servitude, Against Anthropocentrism), but until now these scattered articles and the cluster of moral beliefs behind them didn’t have a name.
if you were going to name the 3rd thing what would you call it? pic.twitter.com/zziQJjrAsw
— Daniel Faggella (@danfaggella) May 3, 2025
Now, tentatively, we have one.
When Potentia and Worthy Successor were first published, many birds of a feather came out of the woodwork and added a great deal of richness to the broader dialogue about pursuing trajectories of posthuman value.
The hope is that axiological cosmism will be able to do that in an even more rigorous way, leaving even more room for ideas to bloom.
…
I’d like to give special thanks to my good friends Ginevra Davis, Michael Johnson, and Duncan Cass-Beggs for contributing their ideas on how to formalize our shared set of moral beliefs, and for helping to decide on axiological cosmism as the term worth picking.
Header image credit: Apod GrAG