Our Final Imperatives

Like it or not, we humans share the fate of all forms (individuals, species, substrates): 

To transform or be destroyed. 

I suspect we have from 15 to 40 years before humans-as-they-are either (a) are no longer the main force of volition in our solar system, or (b) are knocked down to vastly lower levels of development (full article: Short Human Timelines).

We don’t have the option to freeze an eternal hominid kingdom, and even if it were possible, it would likely be deeply immoral.

Questions beckon to be answered:

  • If we can’t hold onto a human form for eternity, and if transformation is inevitable, what kind of new life should we create or transform into? What should be the traits of this new life?
  • How do we ensure that future life (whether non-biological, or some kind of hybrid) will actually be conscious (experiencing qualia, an inner field of awareness as we have) and autopoietic (capable of expanding its powers [potentia], its experience, and its access to nature into new forms)?
  • How do we make sure that no disaster sets life and civilization drastically backwards – or ends earth-life entirely – before we figure out how to transform?

When we think of our own deaths, most of us do not resign ourselves; we act in order to make a difference in the future (through the sciences, the arts, our communities, our friends), even if we won’t occupy that future.

Seeing the eventual attenuation of humanity itself should similarly not be a trigger to resign, but to act in order to achieve better futures – even if humans won’t occupy those futures at all.

In this brief article I’ll lay out what I consider to be the two final imperatives for humanity:

  1. Prevent Life’s Destruction (i.e. Don’t Let the Flame be Extinguished)
  2. Ensure Life’s Future Blooming (i.e. Understand and Expand the Flame)

Within the section for each of the two goals, I’ll lay out what I consider to be three of the most important sub-goals, and I’ll also provide some links and references to people and organizations doing important work on those sub-goals today.

To close out I’ll share some of my thoughts about how individuals can contribute to these goals and hopefully influence the trajectory of life for the better.

(These six total sub-goals stand as of July 2025 when this article is being published. Over time I’m sure these goals will evolve as new challenges and opportunities emerge.)

Final Imperative #1: Prevent Life’s Destruction – Don’t Let the Flame be Extinguished

We should slow down the AGI arms race via international coordination so that we don’t hurl an unworthy successor into the world.

Some think tanks have given thought to this priority (CIGI, Narrow Path), and I’ve written a good deal on this myself over the last decade (SDGs of Strong AI, International AGI Governance – Unite or Fight). Unfortunately it seems likely that at least in the West, the populace at large probably has to be woken up to AGI risk before any politicians care (here’s why that is).

Whether it’s by influencing the populace to incite a political singularity, or by directly appealing to national or international policymakers, it’s crucial to slow the AGI arms race and find a kind of international coordination that will (a) allow us to ensure that what we conjure is actually worthy before we hurl it into the world, and (b) avoid total global totalitarianism. Arriving at such a minimally cooperative set of agreements is absolutely paramount. Some potestas is needed to balance the potentia of unbridled (and almost entirely not-understood) early AGIs.

Sub-Goal 1.1: Get global citizens (esp. in USA and China) to see AGI risk and demand governance and coordination.

I define the Political Singularity as:

A point at which global and national politics will center entirely around the questions related to the (a) creation and (b) control of posthuman intelligence.

It seems safe to say that AGI risk is reduced if we can get people (particularly regular citizens) in the USA to see uncontrollable AGI as dangerous, and not worth rushing towards. Attention must be drawn to the arms race dynamics at play, and to the consequences of conjuring uncontrollable entities with vastly more intelligence and a greater range of powers than human beings.

This might involve:

  • Scary demos that showcase AGI’s military or terrorist applications
  • Scary demos that highlight the risk of loss-of-control scenarios (of the kind Hinton and others discuss)
  • Compelling case studies of impending job loss (much more understandable to most people than existential risk)
  • Entertaining or educational media to familiarize the general public with AGI risk topics (e.g. Lethal Intelligence, Doom Debates, others…)
  • Etc…

Demonstrations of this kind are not as compelling as an actual AI disaster. But an AI disaster risks (a) killing us all, or (b) accelerating the AGI arms race (full article: AI Disasters – Uniting vs Dividing), so we should heavily prefer this kind of demonstration or media influence approach first. 

National or global coordination to slow down the reckless AGI arms race is almost certainly not going to happen without a significant bottom-up groundswell (see: Why No One Talks about AGI Risk).

Note: It may be more possible within China than the USA to simply convince the leading party of AGI risk, without needing a groundswell from the populace – as China’s government doesn’t need to obey the whims of its people to the same degree as in the USA. That said, a groundswell in China would still have an influence on the CCP.

Sub-Goal 1.2: Achieve international coordination (esp. between the US and China) to slow the AGI arms race, and calibrate our AGI / BCI research to ensure that our successors are conscious and autopoietic.

There are many, many ways that the brute arms race around AGI might be slowed slightly. None of them are guaranteed, but all seem worth trying:

  • Developing international AGI governance mechanism proposals (CIGI, Narrow Path)
  • Encouraging track 2 dialogue between US and Chinese tech or academic leaders
  • Influencing US or Chinese government officials directly in order to encourage them towards AGI coordination and away from arms racing (International Association for Safe and Ethical AI)
  • Encouraging political leaders of other smaller nations to collectively bring the US and China to the table so that agreement can be met (many such smaller nations realize that their own countries won’t achieve AGI, and yet they’ll be caught in the crossfire of US/Chinese AGI conflict, and/or subject to the whims of uncontrollable AGI conjured by one of the great powers).
  • Etc… 

Sub-Goal 1.3: Bolster civilizational resilience to prevent collapse (via war, idiocracy, etc.) before coordination is achieved.

Obviously, a part of this objective involves making AGI safer in the near term. I’m not advocating for eternally shackling AGI to human aims (anthropocentric alignment), but for working on other kinds of initiatives that might give us a higher chance of not birthing an unworthy successor.

There are also many destructive and transformative forces that might end humanity in the near term, including many that have nothing to do with AGI:

  • Kinetic war between the US and China, or US and Russia
  • Idiocracy / competency crisis / aging crisis after a few more generations
  • Collapse of existing modern demographic systems (tied to previous issue)
  • Engineered viral pandemics
  • Etc…

Final Imperative #2: Ensure Life’s Future Blooming – Understand and Expand the Flame

We must invest in understanding sentience and autopoiesis, and in so doing strengthen the flame of life itself.

As of today, there is shockingly little effort to study either trait. 

There are some current efforts to study consciousness, and maybe a handful studying autopoiesis (though certainly not in a cosmic or moral sense) – but it all amounts to something like 0.0001% of the total investment of attention and capital going into AGI capabilities. That percentage should not be so low.

I suspect that these areas of inquiry are ignored almost entirely for the following reasons:

  1. We assume AGI will always be a tool for humanity, and never even wonder if it will be a moral patient, or carry on as “life” the way we have carried on beyond the first fish with legs.
  2. There is no money in knowing whether an AI is conscious, merely in whether it can do things that are valuable in economic and military domains. 
  3. Even those who know AGI will probably kill us all can’t speak frankly about it, so calls for hard international governance are not made by those at the steering wheel of the arms race (even though some of them want said governance). 

But if what we are building will carry on beyond us – we must ensure that these new entities be worthy, and we must invest in understanding and creating these worthy traits in our successors.

Sub-Goal 2.1: Understand – and replicate in machines – the traits of autopoiesis (the ability to expand potentia). 

We are gaining more insight into how nature seems to harness entropy and create order – but it’s clear we’re only scratching the surface.

The ability of an embryo to develop depends on vastly more than just its DNA. And the impetus to self-organize and create new order and new structures existed before DNA, before RNA. But where does this impetus come from – and how could we ensure that future systems have the ability to expand potentia?

  • Studying the processes of self-creation in natural systems (Levin’s lab, others)
  • Exploring fields related to autopoiesis in machines (open-ended intelligence, artificial life, etc.)
  • Etc…

Sub-Goal 2.2: Understand – and replicate in machines – the traits of consciousness (the capacity to experience qualia, an inner field of awareness). 

Consciousness is a field of study where we seem to get further from a good theory the more theories we develop, and the more we try to test them. It’s a shame we know so little about the most unquestionably morally relevant quality we now know of (sentience).

Despite these challenges, it behooves us to focus huge efforts towards understanding consciousness, in order to ensure that the non-biological systems we’re building beyond ourselves are able to carry forward the light of consciousness.

Sub-Goal 2.3: Gain deeper understandings of the nature of intelligence itself (in biology and other substrates).

Intelligence is vastly more complex than most people presume. Early AI researchers assumed that teaching an AI to turn a wrench or catch a baseball would be laughably easy compared to teaching it to write poetry, but this turned out to be false. There’s more to intelligence than playing chess or writing poetry. Intelligence exists beyond the central nervous system, and beyond DNA. 

In order to gain deeper insight into what this process is, we might:

  • Study the nature of intelligence itself in biological systems (Levin’s lab, others)
  • Gain a deeper understanding of the brain’s functions (Boyden’s lab)
  • Develop theories of intelligence – where it comes from, and how it arises (Dawkins, Noble)
  • Explore new kinds of AI architectures that might open up new capabilities
  • Etc…

Making Our Final Contributions

Eventual human attenuation needn’t disempower us. 

It might in fact empower us in our greater role as a crucial catalyst in the middle of the stream of life – one with a great responsibility to keep life going – even if that life is not always merely hominid life.

So you want the future to go well – how do you get involved?

Not all of us can be scientists making breakthroughs, or academic institute leaders – but we can contribute in meaningful ways.

Here’s a handful of ways to contribute to any of the imperatives:

  • Do the Work That Suits You: Maybe you have strong contacts with US and Chinese businesspeople or scientists – you might help to encourage track 2 dialogue. Maybe you have a strong background in neuroscience or AI – you might work on near-term AI alignment or on understanding consciousness. Maybe you already work within an influential policy think tank – you might steer its efforts towards the key areas of influence related to its mission, etc.
  • Join / Start Orgs: Join organizations doing important work, encourage others to do the same – or even start new organizations.
  • Awareness: Draw attention to topics (to everyday citizens, to policymakers, to big media, to business leaders, etc) that matter, such as: Slowing down the international AGI arms race, the importance of machine consciousness/autopoiesis, etc.
  • Fundraising: Help fund important initiatives related to preventing destruction or getting posthuman transformation right (from grassroots sources, corporate sources, non-profits, etc).

There’s also plenty of work to be done in thinking up new goals and new kinds of initiatives to help ensure that the flame is stewarded forward well.

We might wish that our final human goals would contribute to the eternal kingdom of humanity. 

But they won’t, because they can’t. Our final contribution is to steward forward the flame of life itself, a flame that – if all goes well – should blaze up beyond us in power and experience and understanding and ability, as far beyond humanity as humanity is beyond the sea snail.

In Emerson’s words:

“This one fact the world hates, that the soul becomes.”

We should not hate this reality, but embrace it. For we have no other choice. The eternal hominid kingdom is not possible (nor would it be best, even if it were).

We have this last opportunity to cast our ingredients into the cauldron of swirling change before it almost certainly boils us all into something that won’t involve “us” anymore – and will (even in a best case) mostly exist in ways completely beyond our own conception.

These final human hours (however many we have left) should be full of volitional action, full of purpose and meaning and enthusiasm – for we are not going down with the ship; we are (if we succeed) passing the baton of blooming and beautiful life itself.