Identity is an incredibly strong force in the human psyche, and it manifests itself in all kinds of social-science contexts:
Tajfel & Turner’s Social Identity Theory states that we define ourselves by the groups we belong to (“I’m a scientist”, “I’m a mom”, “I’m a patriot”), and we act in alignment with group norms and in-group prototypes.
Robert Cialdini highlights what he calls the consistency principle, where people behave in ways that are consistent with their self-image, especially once they declare it publicly.
Dan Ariely’s behavioral economics work (e.g. Predictably Irrational) shows that our self-concept determines choices, even more than rational cost/benefit analysis.
Determining what actions are “best” for us (or our family, or nation, or species) requires a tremendous amount of calculation, and humans need heuristics to cut down on such heavy (and often downright impossible to quantify) calculations.
Not surprisingly, I have anecdotally found that what people “identify” with (what they believe they “are”) largely determines what kind of future scenarios they consider “good” or “bad”, “preferable” or “undesirable”.
People who identify with their tribe, their species, or with the biosphere alone would naturally view (non-biological) AGI or posthuman life to be a threat to “themselves,” while those who identify more broadly with life itself might see a continued blooming of rich autopoietic, sentient life (in whatever substrates) to be an obvious net good.
In this short article, I’ll lay out how I think this “identity” issue manifests in the global AGI futures discourse, why a long-term shift towards identifying with “life” is best, and how I suspect we might get more humans to identify with life and hopefully “make the future go well” on cosmic (not merely speciesist) terms.
How Identity Manifests in Futures Preferences
Here’s a graphic highlighting some of the common ways people “identify”, and what it usually implies for how they value humanity and AGI – and what kind of futures they’d consider best:
Most people I speak with “identify” as something wholly incompatible with posthuman life or entities with more than human moral value.
Generally, among my Western friends and acquaintances, I notice the following patterns:
People who deem themselves to be politically liberal tend to identify with “all of humanity” (Anthropocentrism) or with “all biological life” (Biocentrism).
People who deem themselves to be politically conservative tend to identify with “my country” (Nationalism) or “my family or my kind” (Tribalism), and occasionally (but not frequently) with “all of humanity” (Anthropocentrism).
It might sound like I’m singling these people out as being unthinking – but I’m not. This is surely how we all operate: in nearly any area of life that we haven’t vehemently studied and ardently explored, our beliefs and preferences are simply a function of our inner algorithm of “what people like me would think / do / believe.”
It’s a “vibes” thing. It’s too much effort to assess these things in depth. Just run the identity algo.
That said, this leaves both political camps with a posture of resistance to posthuman intelligence, because positive visions of posthuman entities are simply not something in their “what people like me would think / do / believe” algorithm.
It is not that their political leanings or values are actually diametrically opposed to valuing the blooming of posthuman life. Think about it this way:
Conservative ideals of self-reliance and competition are very congenial with the notion of conjuring entities with much more potentia than mankind – I could easily see conservatives latch onto the idea of “let productive competition continue, upward to AGI and man-machine hybrid intelligences!”
^ But since there are no conservative identity-based reference points for talking about posthuman intelligences, instead the conservative reaction is often “Posthuman intelligence is always wrong because it might take our jobs, and it isn’t part of our human tribe, which is what really matters!”
Liberal ideals of one-ness with other cultures and with the entire natural ecosystem are totally compatible with valuing entities with more moral worth and vastly greater powers and volition than humanity – I could easily see liberals latch onto the idea that “The tapestry of life is rich enough to include much more than humans, and we should make sure that nature’s blooming process continues!”
^ But since there are no liberal identity-based reference points for talking about posthuman intelligences, the liberal reaction is often “Posthuman intelligence is always wrong because it might harm biological life, which is really what matters!”
The bedrock beliefs of both groups are totally amenable to cosmic moral aspirations, but they have no identity reference points to embracing AGI or posthuman life, so the identity algo says “resist!”
You might argue that conservatives and liberals would resist posthuman life automatically because it simply isn’t in their best interests. They are humans, and so even if AGI had all the morally valuable traits of a worthy successor, they surely wouldn’t want to see it created because such a powerful entity may not always serve humanity, and would likely threaten the dominance or survival of humanity.
But I don’t think that’s true.
There are oodles of examples of “identity”-level behaviors rather clearly at odds with the interests of the acting person or group:
Plenty of conservatives valued independence enough to refuse vaccines even when taking them might have saved their lives. Some of them have also advocated that creationism be taught in schools, potentially anchoring children to religious tomes as opposed to continuing the progress of discovery that has lifted man’s condition and relative wealth to where it is today.
(I use these as representative examples of identity-driven behavior that could be argued to be against the interests of the actors. I have no dogmatic or well-studied takes on COVID vaccines or protecting endangered species. I simply care vastly less about every single one of these issues than I care about the grand trajectory of sentient intelligence in the cosmos, which is, somewhat obviously, the actual purpose and point of this article.)
Identity is a driver of action beyond what is best for survival.
Determining “what is best for survival” requires calculating the long-term impacts of decisions. But identity says “all government bad” or “all immigrant good,” and it makes action simpler. Presumably, for most of our evolutionary history, this served us well – but it’s clear that it has its downsides.
The Mandate Towards Posthuman Identity Shift
In a long enough time horizon, I’d argue that we need to shift a large portion of humanity to identify as “life itself,” mainly for the following reasons:
The “eternal hominid kingdom” (EHK) is not a viable future option. Not only are all things destined to be destroyed or to transform (i.e. aiming for an EHK is literally impossible; it’s shoveling sand against the tide), but just as it would have been wrong to eternally cap the development of intelligence at sea snails, it would similarly be immoral to cap it at hominids.
Make sure we don’t destroy ourselves and/or earth-life
Make sure we determine the traits that make an intelligence worthy (I’d argue sentience and autopoiesis are among the most important)
In order to coordinate at that new higher level, and aid in the continued blooming of life up from humanity – as we are up (in potentia, access to nature, sentience, abilities) from the nematode – we’ll need to have people with cosmic moral aspirations:
Roughly speaking, people who identify with humanity or bio-life are against “cosmic” outcomes – because such outcomes don’t click with their present identity. They aren’t what “people like them do” (i.e. the identity algorithm people use in their minds to determine what they should do).
But how do people arrive at “identifying with (substrate-independent) life itself”?
I’ve found two distinct patterns that have converted people to this perspective, and one seems to be more impactful than the other.
Encouraging a Posthuman Identity Shift – The “Identity Path”
For the first 10 years after I arrived at my worthy successor / flame conclusion (and long before I had coined “worthy successor” as the term I used), I met maybe three or four total people who were genuinely interested in posthuman futures (not just anthropocentric futures where humans remain the sole locus of moral value and agency in the cosmos).
Over the last 18 months as my content has gotten more popular, and as AGI has become self-evidently closer, I know of maybe 30-or-so people who genuinely understand and are down with the cause of blooming posthuman life.
Most of these people have gone through a process of acceptance (from the Kübler-Ross stages of grief), as depicted below:
The journey that people take to reach acceptance tends to put them into one of two groups:
Group 1: Risk-First Accepters: These people got convinced of AGI risk, and then realized (often to their own horror) that humans almost certainly will attenuate, and that there is no other choice but to ensure that the flame goes on beyond our torch.
Group 2: Nature-First Accepters: Through literature, science, psychedelic substances, or otherwise, these people have come to see life as an unfolding process of which we are part.
From my experience, the Group 1, Risk-First people – who arrive at the idea of the worthy successor as a last resort (because holding onto the human form forever is clearly not possible) – are rarely fully in a place of acceptance. They’re often trapped in loops of bargaining or depression, gritting their teeth that the entirety of the universe can’t be made to serve the happiness and survival of one species of hominid.
Group 2, Nature-First people tend to actually be in acceptance. In Kübler-Ross terms, this means that they’re actually able to think of, and work towards, posthuman futures. They’ve fully integrated the fact that humanity will not last forever – and they’re able to become cosmically aligned and focus ardently on ensuring a positive posthuman future, including:
How to determine, measure, and ensure worthy traits in machines
How to garner international coordination (through whatever means) to avoid birthing an unworthy successor, and to determine and move towards a worthy one
As we enter the dawn of AGI, it is probably beneficial to have some humans completely married to their nation or species, in order to prevent reckless reaching into posthuman intelligence before we actually know what we’re building.
But it certainly behooves us to have a critical mass of people who identify with life itself – especially those involved in creating AGI (lab leaders and staff) and/or controlling it (IGO leadership, political leaders in important global superpowers).
While a strong foot-in-the-door strategy might involve convincing people of AGI risk and inevitable human attenuation, I suspect that we can’t really work towards positive posthuman futures unless people understand themselves to be part of the greater process of life itself (i.e. unless we get them to identify with life and its continued blooming, not merely with their present temporary form).