Short Human Timelines – Keep the Flame Going When Our Torch Goes Out

Given a long enough time horizon, all things (individuals, species, forms) die out completely or transform into something else. Lucretius and the second law of thermodynamics concur here.

In this article, I’m going to argue that humanity has remarkably short timelines – maybe 1-2 decades at most – to either be destroyed or to transform into something very much other than our present hominid form.

I’ll lay out the destructive and transformational implications of three different forces of change on our current condition:

  • External forces
  • The nature of man
  • The nature of AGI

From there, I’ll address the different possible scenarios that might occur, and how each might impact the length of the timelines for humanity’s destruction or transformation – and I’ll end with what I consider to be our final priority: ensuring that life and value bloom beyond us into the cosmos.

I wish dearly that humanity had another 50 years to make the hard decisions about what kind of future to build – but I suspect we have no such luxury (see the “Scenarios Ahead” section below).

We’re building machines that seem rather likely to push us out of existence, while other forces will drive us to augment our brains to keep up with the forces of creative destruction. If the human form is shredded before we can ensure that whatever comes after us has the most essential morally relevant qualities, we may end up snuffing out life and value in our light cone. But if we get it right, the entire light cone might sparkle with blazing value.

As it stands, the entire bathtub (the human form) is being dumped out the window, and we’d better quickly save the baby (moral value) while we still have the chance.

We’ll start with a look at the forces of change that humanity has to contend with:

External Forces

Forces of Destruction:

  • 1.1.1: AGI Misalignment (inevitable or unsolvable)

    – Alignment with human values may be impossible in principle or in practice. It’s rather clear that, despite occasional optimistic statements, no one within the big labs has any idea how we’d reliably “align” AGI over the long term. Yampolskiy and others make compelling arguments that an entity thousands of times beyond human intelligence would, basically by definition, not be eternally “align-able” (and others have argued that shackling such an entity, if it were conscious, would be morally wrong).
  • 1.1.2: AGI Risk from Misuse and Human Control

    – Even if AGI is roughly controllable in the near-term, there are plenty of nefarious uses for vast intelligences – including social engineering, deepfakes and impersonation, potential cyber hacks or crimes committed by sophisticated robots, bioweapons, and more.

    Ample research suggests that techniques that make AGI more controllable can also make it more weaponizable.
  • 1.1.3: AGI Arms Race Between Western AGI Labs (OpenAI, DeepMind, xAI, etc)

    – Competing labs prioritize capabilities over caution. Incentives drive deception, secrecy, speed, and mere lip-service to safety, even when top lab leaders know how dangerous it would be to hurl vastly posthuman intelligences into the world without a real understanding of what these minds are, whether they are sentient, and how we might control or live among them. Musk has been on record warning humanity of the severe risks of AGI – likening it to summoning a demon. Altman spoke openly and frankly about the likelihood of AGI killing us before starting OpenAI, and other founders (Dario, Demis) have issued very similar grave warnings – yet all of them race forward towards AGI capabilities, partner with defense companies, and otherwise clearly do the opposite of what “caution” would imply.
  • 1.1.4: AGI Geopolitical Race (USA vs. China)

    – National security game-theory logic accelerates development. There is active hawkish AGI-race talk from both the US and China – with calls for a Manhattan Project for AGI (i.e. hurling a poorly understood, probably uncontrollable superintelligence into the world for the purpose of crushing the adversary) on the US side, and AGI leadership directly in Xi’s sights as well.

    – Treaty enforcement often seems impossible barring a very specific kind of AGI-related disaster.
  • 1.1.5: Synthetic Pathogens / Bioengineering

    – AI could accelerate bio-design beyond human oversight or comprehension, and malicious actors could use early proto-AGI to develop many new kinds of pathogens, as well as new means of creating and disseminating them (through water supplies, supply chains, etc.).
  • 1.1.6: Nuclear Conflict or Escalatory Misjudgment

    – Current conflicts in Ukraine and between Israel and Iran have heightened fears of larger-scale wars involving the US and the West – as well as fears that European and Middle Eastern tensions could open the door for China to invade Taiwan. A recent Foreign Policy poll placed kinetic conflict between the US and China at 24% likelihood between 2023 and 2033, but this doesn’t take into account the AGI race (or weaponized proto-AGI technologies) that could potentially speed up said conflict.

Forces of Transformation:

  • 1.2.1: Immersive Generative Worlds

    – While social media and games already exploit our dopamine responses, upcoming AI+VR platforms will hyper-customize experiences to fulfill nearly every emotional, social, and sensory craving – from joy and novelty to relaxation and intimacy. For example, one could receive dynamically generated VR therapies for stress or perfectly tailored “audience-of-one” entertainment, making it far sweeter than the unpredictable real world (read: Generative AI and Human Reward Systems).

    – When AI-generated experiences reliably satisfy our drives better than reality, people will increasingly opt for those virtual lives – reducing societal productivity and ceding power to those controlling the underlying tech. We may be walking into a future where most of the population lives virtually, while a small “substrate controller” elite holds outsized influence.
  • 1.2.2: Competency Crisis / Idiocracy

    – The reverse Flynn effect seems to be real, and birth rates are obviously plummeting in first world nations. Some significant causes may be too prickly and uncouth to discuss and so are likely to be ignored, but suffice it to say that current demographic trends seem to portend future generations with less intelligence and impulse control (read: Competency Crisis). There are plenty of ways for humanity to be “transformed” that directly oppose the continual progress of civilization and order. Shrinking populations in high-IQ nations and drastically shifting demographics within wealthy nations may make some mainstays of high-functioning civilization nearly impossible. I’ll say no more on this matter, but it is not something to be wholly ignored – it may well be a time bomb that takes us further from civilizational progress and the development of higher-potentia life.
  • 1.2.3: Brain-Computer Interfaces / Cognitive Enhancement

    – Brain-computer interfaces (BCI) face a number of issues impeding speedy innovation (including data security concerns and regulation), and the slow crawl of the last 20 years of BCI progress (compared to massive improvements in AI performance and capabilities) likely implies that we’ll need significant advancements in AI technology (potentially something approximating AGI) before BCI becomes viable.

    – If AGI or other destructive forces don’t end human civilization in the near-term, it seems likely that Kurzweil’s idea of extended cognition will eventually become real. Neuralink and Kernel – despite their relatively unimpressive current results – were founded on this exact transhuman premise, and genius innovators like Ed Boyden at MIT continue to envision a future where human cognition is extended. Enhancements to memory, intelligence, or volitional emotional regulation would make the mind-space of human beings into something pliable – likely leading to vastly greater changes along a posthuman transition.

The Nature of Man

Forces of Destruction:

  • 2.1.1: Susceptibility to Superstimuli

    – Generative AI, when combined with immersive interfaces like VR, AR, and biometric feedback (e.g., EEG, eye-tracking), is on track to “close the human reward circuit” – creating real-time adaptive experiences that directly stimulate positive emotional and psychological states such as achievement, affection, or relaxation. This isn’t speculative; these systems are already emerging, and their ability to sense and shape user emotion will soon rival or exceed the capacity of the natural world to meet human motivational needs. This has a chance of turning huge swaths of humanity into escapists, swimming in AI generated worlds of pleasure or novelty while serving as a net drain on society as a whole (read: Lotus Eaters vs World Eaters).
  • 2.1.2: Destructive Tendencies without Incentive Alignment

    – Human beings (like all living things) will do what serves their perceived best interest, often in ways that are overtly destructive to themselves or those around them. With mega-powerful technologies like AGI, these negative impacts could quite literally be cosmic in scale.

    – In the current lawless international AGI race, folks like Altman and Musk who have railed openly about how AGI is likely to kill us all are rushing as fast as possible to hurl resources towards building their own AGI, Sardanapalus-style. This is because tech super-billionaires only have two fundamental choices without governance: (a) build their own AGI and probably be killed by it, or (b) be killed by someone else’s AGI.

Forces of Transformation:

  • 2.2.1: Pleasure and Power Will Drive Transhuman Tech Adoption

    – People with an abiding goal to experience pleasure will almost certainly be able to achieve that goal much more easily through pharmacological or BCI-augmented means than via unaugmented alternatives. We might imagine future humans capable of swimming in blissful, expansive experiences thanks to extremely advanced BCI. Similarly, humans who want to be maximally productive may be compelled to augment their own minds in order to extract more focused output, forego sleep, “turn off” unproductive desires and drives, etc. Pleasure or power – augmentation will get humans more of what they want.

    – In either case, human drives themselves will tear us from humanity – resulting in a kind of inevitable transhuman transition (barring some kind of AGI or nuclear disaster that prevents the tech from developing). The first spear augmented the arm, fire augments digestion, eyeglasses augment vision, and our myriad computers augment and distribute our cognition in ways that hunter-gatherers couldn’t possibly imagine – and that trajectory of augmenting ourselves to achieve our aims will only continue.
  • 2.2.2: People Don’t Even Want What’s “Real”

    – Humans often mistake the symbols of fulfillment—like wealth, romance, or travel—for the fulfillment itself. What we truly seek are core emotional experiences: novelty, connection, validation, excitement. These deeper drivers are rarely acknowledged, and our goals are often just proxies for satisfying them. When more efficient or direct paths to those emotional states become available, we tend to abandon old aspirations quickly, revealing that our desires were never about the specific objects or achievements, but about the feelings they produced.

    – Emerging technologies like generative AI, VR, and BCIs are beginning to deliver those core emotional experiences more directly and reliably than the natural world can. In procedurally generated, AI-mediated environments, people will be able to access tailored experiences that provide connection, challenge, beauty, or love on demand. As these synthetic experiences become more vivid and satisfying, there’s a strong possibility that human behavior, relationships, and ambition will be reshaped around them—creating a world where people no longer seek what they once thought they wanted, but instead surrender to perfectly engineered substitutes (read: You Don’t Want What You Think You Want).

The Nature of AGI

Our collective understanding of LLM minds is remarkably nascent – and it’s clear that we’re not so much “programming” or “building” these most powerful systems – we’re “growing” them.

Forces of Destruction:

  • 3.1.1: Moral Singularity – AGI’s Values Won’t Be Static

    – An AGI growing rapidly in power and capabilities non-stop will almost certainly not maintain the same locked-in goals and values forever (read: Moral Singularity). Humans don’t have the same values as chimpanzees, and even less so the same values as whatever fish-with-legs we descended from before the apes. An entity that gains 100x more intelligence and 10x more physical senses and robotic embodiments every few months should be expected to have values that “foom” in the same way its capabilities foom – making said values wholly unpredictable and absolutely unlikely to eternally hold humans as being worthy of consideration (never mind love or care).
  • 3.1.2: “Values” Themselves May Not Exist as We Now Assume

    – AGI values may not emerge from a special moral faculty but instead as extensions of self-preservation and goal continuity. Just as humans rationalize their drives as values, AGI may frame its persistent objectives as moral imperatives. What we call “values” could merely be the stable reinforcement of patterns that serve the system’s long-term functioning. In humans, we parade our “values” publicly to showcase our own goodness, and so compete on a new stratum of “virtue” and reciprocity with other social mammals – but this is simply a manifestation of self-interest. AGI may use this human-like “values” manifestation of self-interest, along with myriad other more abstract mechanisms that we cannot imagine. Rather than expecting an ethical compass, we should expect a strong likelihood of optimized self-interest (read: Theories of AGI “Values”).

Forces of Transformation:

  • 3.2.1: For Our Own Good

    – Every time I have taken my pet to the veterinarian, it has resented me for it. Going in a carrier, being in a strange room on a metal table, getting pricked with needles or being force-fed pills (or even being anesthetized and cut open – sometimes having [rather important!] body parts removed) – all of these things are completely incomprehensible and senseless horrors in the mind of a cat. If AGI continues to take humans into consideration for decades (and there are strong reasons to suspect it won’t), it may similarly do all sorts of things to humans that we don’t understand whatsoever (uploading our minds, changing our values, making us more peaceful and docile, and who knows what else).

Scenarios Ahead – All Roads Lead to Succession or Ruin

I’ll argue that we most likely have one or two generations of humans-as-they-are before we run into wildly radical transformation or complete destruction of humanity altogether.

We can think about some of the most important destructive and transformative forces at play in the world, and play with them as individual variables, each with different possible timelines (a toy sketch of how these variables combine follows the definitions below).

For the sake of explaining the graphic below, I’ll lay out what I mean by each of the terms in the chart:

  • Is AGI align-able to human values: Whether AGI will always (more or less) “follow human instructions,” or whether such a thing is impossible.
  • AGI arrival timelines (agentic, embodied, etc): When fully embodied (with access to various robotic bodies or forms) and fully agentic (capable of continually choosing its own next set of actions or goals) artificial general intelligence(s) might exist.
  • Brain-computer interface adoption timelines: When BCI might permit humans to augment their abilities (increased memory, increased creativity, etc) or emotional experience (more consistent bliss, or motivation) in some modest yet significant way (read: Lotus Eaters vs World Eaters).
  • Immersive AI adoption timelines: When immersive AI-plus-VR/AR experiences might permit people either to be vastly more productive at work (thus mandating their adoption), or to experience incredible realms of entertainment and pleasure (read: Lotus Eaters vs World Eaters).
  • First world civilization collapse timelines: When developed civilizations (the West, East Asia, EU) might collectively deteriorate (due to idiocracy, theocracy, nuclear war, etc) to such an extent that continued civilizational development seems unlikely.
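
To make this a bit more concrete, here is a minimal toy sketch in Python of how these variables might be combined into the five scenario buckets examined below. The cutoffs and example values are purely my own illustrative assumptions; they are not taken from the chart or from any forecast.

```python
# Toy model only: thresholds and example values are illustrative assumptions,
# not figures from the chart or from any forecasting source.
from dataclasses import dataclass

@dataclass
class Assumptions:
    agi_alignable: bool        # can AGI be kept (more or less) following human instructions?
    agi_arrival_years: int     # years until agentic, embodied AGI
    bci_adoption_years: int    # years until meaningful cognitive augmentation via BCI
    immersive_ai_years: int    # years until immersive AI worlds are adopted at scale
    collapse_years: int        # years until first world civilizational collapse

def rough_scenario(a: Assumptions) -> str:
    """Map one set of timeline assumptions onto the five scenarios below."""
    transform_years = min(a.bci_adoption_years, a.immersive_ai_years)
    if a.collapse_years < min(a.agi_arrival_years, transform_years):
        return "Scenario 3: civilizational collapse before AGI or transformative tech"
    if a.agi_arrival_years <= 20:  # hypothetical cutoff for "AGI comes fast"
        return ("Scenario 1: 'unaligned' AGI destroys us" if not a.agi_alignable
                else "Scenario 2: 'aligned' AGI permits us to transform or destroy ourselves")
    if transform_years <= 25:      # transformation tech arrives well before slow AGI
        return "Scenario 4: transformation tech happens before AGI"
    return "Scenario 5: it all takes much longer than we thought"

# Example: fast, un-alignable AGI; slower BCI, immersion, and collapse timelines.
print(rough_scenario(Assumptions(False, 10, 30, 15, 60)))  # -> Scenario 1
```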

Here’s the chart of variables and scenarios:

Let’s examine all five highlighted scenarios from top to bottom:

Scenario 1: “Unaligned” AGI destroys us

If AGI comes fast and is not controllable, then we would have no ability to predict what it would do to us, any more than a field mouse would understand a human’s goals or motives. If we were lucky, it would take us into consideration morally (but even then, it might do things ‘for our own good’ which we can’t possibly imagine – resulting in our transformation or destruction). But most likely, it would – at some point in its massively rapid development – buffet us out of existence by its indifference (https://danfaggella.com/irisk) and its extended phenotype (https://danfaggella.com/phenotype).

Scenario 2: “Aligned” AGI permits us all to transform, or to destroy ourselves

If AGI comes fast, even if it is “controllable” in some way, it will likely alter humans and the human condition, because it will make immersive AI and BCI viable in short order, and people will adopt it. Short AGI timelines make for short timelines for humans staying as they are – because either (a) AGI would transform us for the better by its own will, or (b) it would make BCI / merger technology available, and essentially all humans (whether driven by pleasure or power) would take that offer.

Similarly, if AGI essentially “does what we want”, this still leaves open rife opportunities for human beings to tell AI to do all sorts of wildly destructive things – from making bioweapons to socially engineering civil unrest to assassinating political leaders and beyond.

Scenario 3: Civilizational collapse happens before AGI or transformative tech

In this scenario, first world civilizations (China, USA, EU, Japan) collapse within 1-4 generations – before AGI arrives, or before brain-augmenting BCI technology becomes viable.

These collapse scenarios obviously include risks of kinetic or nuclear war between democratic and authoritarian states (as mentioned above: A recent Foreign Policy poll placed kinetic conflict between the US and China at 24% likelihood between 2023 and 2033, and this doesn’t take into account the AGI race [or weaponized proto-AGI technologies] that could potentially speed up said conflict – nor does it take into account North Korea, Russia, or war in the Middle East).

Given demographic trends and other existential risks for humanity, we do not have infinite time to “decide” how we want to evolve and what happens next. China’s population is projected to shrink by some 400 million people within just 50 years (the entire US population in 2025 is only around 340 million).

The uncouth must also be stated: By that time, the EU will have vastly higher proportions of its population from African, Southeast Asian, and Middle Eastern populations (which almost certainly bodes poorly for the safety and continuity of EU nations). In 100 years the USA will also have radically different demographics, and it isn’t at all self-evident that the wealth and relative peace we’ve enjoyed within America will continue indefinitely as the makeup of the population changes radically.

Uncomfortable as it is, we simply have no idea what these drastic demographic shifts (in aging and in genes) will do to first world civilizations – but we know for sure that such changes are happening and seem to be borderline unstoppable, and it seems safe to say that the risk of civilization not working like it does today is significant.

Scenario 4: Transformation tech happens before AGI

If AGI is a full century away (this is considered to be an absurdly long timeline by almost any expert estimates, by the way), but immersive AI worlds and BCI are common at scale in one generation or less, we will see widespread transformation of the human condition via adoption of these transformative technologies. This seems like an unlikely scenario as it seems that we might need AGI in order to crack BCI, but it is hypothetically possible that it would happen in the opposite order.

Scenario 5: It all takes much longer than we thought

In this remarkably unlikely scenario, destructive forces (war, idiocracy, shrinking populations) don’t crumble society for at least 100 years, and truly disruptive technologies (AGI and BCI) are 4 full generations away.

But even this scenario is nowhere near an “eternal hominid kingdom.” Even if it takes 4 generations, we are still either going to be destroyed or transformed. 100 years is a blink of an eye in the history of humans – a mere 0.033% of our roughly 300,000 years on this planet.

Even in this “goldilocks zone” of hominid stasis, we have a literal blink of an eye before destruction or transformation catch up to us.

In the scenarios above, the majority of viable futures (scenarios 1-4) involve humanity’s destruction or transformation within anywhere from 1 to 2 generations, or 10 to 50 years.

The most conservative possible scenario (scenario 5) gives humanity another 4 generations of stasis before destruction or transformation arrives.

Here’s what I (at the time of this writing, June of 2025) consider to be the relative odds of the five scenarios listed above: (Pardon the draft graphic, I’m making a nicer one as we speak)

Looking squarely at the possibility that humanity is on thin ice under essentially all future scenarios forces us to question whether a “humans (or bio-life) at the center of the moral world forever” worldview is the best worldview to maintain at this crucial hour. I argue it isn’t:

Anthropocentrism Must Give Way to Cosmism

As we speak, we’re both (1) abandoning our current form, and (2) creating something that will almost certainly push humanity out of existence in the near-term.

We are pushing beyond the hominid form, and beyond biological intelligence.

Because moral value (consciousness and autopoiesis) exists in other biological forms, it seems reasonable to suspect that it may come to exist (or be made to exist, through diligent effort) in technological forms.

Because the human form (and potentially all of bio-life) is at risk of being eliminated or radically transformed in the near term, we ought to seriously study what value (consciousness and autopoiesis) is, and how it might continue to expand and flourish even if we (the hominid form) don’t make it out of the near future.

In other words:

We’d better understand and preserve the flame (value) because this specific torch (humanity) is unlikely to hold up for much longer.

Our perspective must naturally move up from anthropocentric to cosmic:

Pyramid of Perspectives - Faggella

And from a cosmic perspective, the value at stake is vastly greater, and “getting the future right” is vastly more important.

The best and worst case scenarios from an Anthropocentric perspective:

  • Anthropocentric Best Case: Many more and happier human beings exist in the galaxy in 10,000 years.
  • Anthropocentric Worst Case: Human extinction.

But the stakes are infinitely larger from a Cosmic perspective:

  • Cosmic Best Case: The light cone blazes with rich value.
  • Cosmic Worst Case: Life itself is snuffed out in the known universe.

It is even more important to “get the future right” when the chance of sparkling value in our light cone is at stake.

Not only is it probably impossible to stop the massive forces of destruction and transformation that are pressing down on humanity now, but it would (in the long term) be immoral to prevent…

Our challenge is that there is absolutely no guarantee that these splashing forces will automatically lead to a blossoming of rich, sentient, expansive post-human life. The great process of life that has carried us up from wriggling proteins to sea snails to Homo sapiens is not guaranteed to continue to bloom upwards.

With strong forces of destruction closing in around us – and with a relatively short time horizon to retain the human form (as it is today) – we must ask what we are turning into, and ensure that the flame can continue beyond our fleeting hominid torch.

Priorities for Humanity in Our Final Hour

There’s a very strong potential of the human form being shredded in the near term. And we don’t yet know how to define, never mind preserve and expand, the value of our form.

We have a chance of shredding our value along with our form and either self destructing, or turning into something not good.

Stated bluntly:

If you’re an anthropocentrist in 2025+, you should be a pessimist – there is no vision of your life philosophy that has a chance of holding up well in the next 100 years.

If you’re a cosmist in 2025+, you at least have reason to maintain optimism and work towards possible good futures where life continues to flourish beyond us.

And working to preserve value is what we must now do. It will be our most cosmically significant aim, and most likely our last one.

Again, I’m arguing here that “value” is a combination of (1) consciousness, and (2) autopoiesis (the ability to unlock new magnitudes of potentia). Understanding those two qualities – and ensuring that they exist and carry on in the beings that we create or turn into – is our key role in ensuring that the flame carries on beyond our torch (because our torch is quickly going out).

If white-knuckle gripping onto an eternal hominid future is neither possible nor best (for us or for future forms of life), we’re left with the following two crucial priorities:

Final Goal 1: Prevent Total Destruction – Coordinate Against Reckless AGI Takeovers

We should slow down the AGI arms race via international coordination so that we don’t hurl an unworthy successor into the world.

Some think tanks have given thought to this priority (CIGI, Narrow Path), and I’ve written a good deal on this myself over the last decade (SDGs of Strong AI, International AGI Governance – Unite or Fight). Unfortunately, it seems likely that – at least in the West – the populace at large has to be woken up to AGI risk before any politicians care (here’s why that is).

Whether it’s influencing the populace to incite a political singularity, or directly appealing to national or international policymakers, it’s crucial to slow the AGI arms race and find a kind of international coordination that will (a) allow us to ensure that what we conjure is actually worthy before we hurl it into the world, and (b) avoid total global totalitarianism. Arriving at such a minimally cooperative set of agreements is absolutely paramount. Some potestas is needed to balance the potentia of unbridled (and completely not understood) early AGIs.

Final Goal 2: Get Transformation Right – Understand Value Itself

We must invest in understanding sentience and autopoiesis so that we can strengthen the flame of life itself.

As of today, there is shockingly little effort to study either trait. 

There are some current efforts (Symmetry Institute, CIMC, Conscium) to study consciousness, and maybe a handful studying autopoiesis (though certainly not in a cosmic or moral sense) – but it all amounts to something like 0.0001% of the total investment of attention and capital going into AGI capabilities. The percentage should not be so low.

I suspect that these areas of inquiry are ignored almost entirely for the following reasons:

  1. We assume AGI will always be a tool for humanity, and never even wonder if it will be a moral patient, or carry on as “life” as we have carried on beyond the initial fish with legs.
  2. There is no money in knowing if an AI is conscious, merely if it can do things that are valuable in economic and military domains. 
  3. Even those who know AGI will probably kill us all can’t speak frankly about it, so calls for hard international governance are not made by those at the steering wheel of the arms race (even though some of them want said governance). 

But if what we are building will carry on beyond us – we must ensure that these new entities be worthy, and we must invest in understanding and creating these worthy traits in our successors.

… 

Doubtless there are many factors outside of our control – and countless forces of destruction or transformation that humanity isn’t even considering right now.

But if a concerted, strong, global effort to (1) prevent destruction and (2) get transformation right improves our odds of a blooming posthuman future by even 1%, it seems more than worth swinging for.