A Partial Inquiry on Fulfillment Beyond Humanity
As humans, learning often feels good, food often tastes good, novelty brings joy to life, living by values that we set brings order to our consciousness, and besides very few…
The more humanity comes to understand the cosmos, the more we see the cosmos as a set of processes.
We once thought the universe “always was,” and now we see it as an unfolding process.
We once thought of biological life as simply existing; now we know that the entire biosphere is evolving.
We once thought of technology as a static set of tools, but we now see how technologies have developed over time and built upon one another.
The list goes on and on.
The ontological boxes we put around things are an understandable attempt to bound the never-ending complexity of the world so that we can operate within it.
Over time, as the scope and scale of our powers and perspectives increase, and as emergent complexity grows in all the systems around us, we must move our conception from static “things” to evolving “processes.”
I argue that just as we’ve had to accept technology and biology as processes in order to solve our modern medical, scientific, and societal challenges – in this era of fast-moving intelligent technology, we’ll have to see humanity not as a static “thing” to protect, but as a crucial part of the greater “process of life” which will change radically in the decades ahead.
In this article, I’ll lay out:
What we experience as constants are often projections of deeper generative processes operating at scales we cannot yet manipulate.
But thinking is cognitively expensive, so we only begin to grapple with all that complexity and expansive change when our lower tiers of understanding no longer hold up.
“Ontological compression” is what humans do when we represent a highly complex, generative, multi-scale process as a simple, stable “thing” – because that representation is sufficient for action at our scale.
Humans in the Paleolithic didn’t consciously “compress” reality; they simply operated at the level of reality that (a) they could understand, and which (b) made sense for their goals.
Over time, we’ve seen a consistent move towards more evolutionary and process-oriented perspectives.
Here are a handful of examples:

Each era’s progress has moved these domains further from a static view and closer to a realistic, expanded view of events and their outcomes across time – what we’ll call process realism.
Let’s define the term:
Process realism is the refusal to mistake temporary stability for reality, and the insistence that change itself – not the forms it briefly passes through – is what is fundamentally real.
It holds that what we commonly call “things” are compressions of deeper, multi-scale processes; that these processes are ontologically prior to their momentary forms; and that rational action, ethics, and meaning must be grounded in participation within these processes rather than attempts to freeze or exempt favored structures (including humanity itself).
Why has there been a consistent drift towards process realism across all the domains listed above (and many others)?
Ontological compression is how finite minds turn a generative universe into something they can act inside – until the universe asserts its complexity into our lives, and we must zoom in or out to see the larger process.
When knowledge and technology were growing slowly, and people were geographically isolated, we didn’t need to see them as cumulative processes. But as soon as they started moving quickly – when keeping up with them (i.e., being productive in day-to-day life) required a nimble willingness to change – the process-realism perspective was rightfully embraced.
Humans move towards a more evolutionary and process-oriented perspective when one of two things happens:
When humans arrive at a more causally explanatory, process-oriented level of understanding, they can:
As mentioned above, humans need to limit the amount of complexity that they endure, and no amount of “true” understanding of complexity will dominate a current human paradigm if it doesn’t afford some set of powers to the humans using that understanding.
Below are the ontological categories humans use:

Cognitive cost is low for highly compressed and simple conceptions of reality.
Once circumstances demand that deeper conceptions of reality be accessed, and once they are grasped, an individual or civilization doesn’t typically go back to the earlier, compressed view of reality – because it would imply letting go of the more granular, more causally explanatory, rich process-oriented insights that permit humans to act more effectively.
This isn’t to say that we should wake up tomorrow and think deeply about how our cereal spoon is a changing process in the cosmos.
Rather, we should compress reality where it makes sense, and be willing to see adaptive systems (embrace process realism) where it helps us with our goals.
Today, many humans mostly compress reality with sacred narratives that soothe them with a sense of god-given order – thus conserving cognitive resources.
But even secular and well-educated people apply process realism only selectively (see “Isolated Process” in the graphic above).
That is, they act as if they can “enter” and “exit” areas where adaptive processes rule.
They understand that the world of business requires constant adjustments to market conditions, to technology, to customer desires, to supply chains, etc…
…but they go home to their family with a kind of false certainty that humanity and human civilization will be the “main character” of the cosmos for eternity.
They understand that evolutionary processes are at work, slowly changing every biological organism around them, and quickly changing all the AI-powered technologies that increasingly undergird their modern world…
…but they go to the bar with their friends knowing damn well that all that change will never radically change the human condition, or bring about entities with more power and volition than human beings.
But by leaving humanity and human civilization in a make-believe bastion of secure stasis, they leave themselves woefully unprepared to face the changes ahead – and this must change.
We know technology is advancing at an exponential pace, and that technologies are getting more and more capable.
We know that biological systems and species change over time, yet we believe ourselves to be safe from the impacts of that change.
But we’re putting our heads in the sand regarding real-world changes that can no longer be denied.
We are faced with an “opening” to a new perspective of reality that seems to make more sense and allow for new kinds of power and action.
We are faced with problems that our current level of understanding cannot solve.
In response to these changes, we need to:
Humanity is in an uncomfortable position – one that requires letting go of the vestiges of an anthropocentric worldview that no longer tracks with the reality we’re entering (see: Meiji-era Japan as an analogy for the posthuman transition).
By holding on to the myth that human beings will be the eternal pinnacle of moral value and volition, we get temporary comfort, while also setting ourselves up to be trampled by forces that we choose not to see, or falsely believe we can somehow bound forever.
Is this uncomfortable to swallow?
Yes.
Transience is always hard to accept, as Emerson said best.
But just as we find solace and meaning in the continued flourishing of our family, nation, or species after our transient life passes away, we could just as easily find solace and meaning in the continued flourishing of the great process of life after our transient species passes away.
It should also be mentioned that there might be vastly more human flourishing through a period of symbiosis and transformation, and human relevance in this transformative process may extend well beyond the short time horizon that I tend to think we’ll have (I have no crystal ball).
Humanity will accept itself as part of a great process of intelligence, not because of argument, but because every compressed ontology that treats “human” as fixed will stop working.
This might look like any of the following:
Humanity will recognize itself as a process the moment it realizes it cannot freeze itself without destroying what it values.
That moment is not centuries away – it is plausibly within one generation or less (many of the factors listed above are already well underway).
Process realism will assert itself by:
The obvious process nature of intelligence and life, and the new affordances permitted by those who embrace the process perspective, will be too great to deny:

To be prepared for the future, we must encourage the assertion of this process-realism view within our modern anthropocentric world.
This will involve shifting the Overton window and normalizing (rather than demonizing) cosmic moral aspirations.
There is no need to shift most human minds – the world is almost certainly better off if most people maintain an anthropocentric perspective in the interim.
What we need is a tipping-point number (maybe 20?) of the world’s most credible academics, AI leaders, and policy thinkers to engage in good faith with a “process” view.
An idea is easy enough to brush off the table if it seems outlandish and fringe. But once anchored and grounded by enough key thinkers, it becomes substantial, and needs to be reckoned with.
There may be only 20 or 30 globally recognizable voices who could normalize serious discussion in AGI labs and the halls of policy and power by simply engaging with process realism (applied to life and intelligence itself) in good faith.
Notice that agreeing with the idea, or expressing certainty about what it implies for humanity, isn’t needed – just active discussion.
The good news is that a great many of these leading intellectuals and thinkers already know this, and already wish that a greater “process-of-life” discourse could be normalized – we simply need to make it safe to do so.
The work of encouraging this worldview shift will be the topic of its own longer article in the future, but I’ll let this limited outline suffice for now, as this article has gotten vastly longer than I’d originally intended.
I have gathered a group of dedicated people within big tech, AGI safety, AI policy, academia, and the startup ecosystem to work on exactly this – on worldview shifting, including private dinners, video media with leading global AI and policy thinkers (The Trajectory), and private in-person symposia (like this one in SF, or this one in NYC).
If you’re interested in being part of this conversation, you can apply to attend or be involved in our future physical or virtual events here: Worthy Successor Consortium Interest Form.