Green Eggs and Ham – Facing Future Technology and AI Like an Adult
If you were a child at any point in the last 50 years, you’re probably familiar with the story. Sam I Am tries to get a protagonist (let’s call him…
Given a long enough time horizon, all things (individuals, species, forms) die out completely or transform into something else. Lucretius and the second law of thermodynamics concur here.
In this article, I’m going to argue that humanity has remarkably short timelines – maybe 1-2 decades at most – to either be destroyed or to transform into something very much other than our present hominid form.
I’ll lay out the destructive and transformational implications of three different forces of change on our current condition:
From there, I’ll address the different possible scenarios that might occur, and how each might impact the length of the timelines for humanity’s destruction or transformation – and I’ll end with what I consider to be our final priority: ensuring that life and value bloom beyond us into the cosmos.
I wish dearly that humanity had another 50 years to make the hard decisions about what kind of future to build – but I suspect we have no such luxury (see the Scenarios Ahead section below).
We’re building machines that seem rather likely to push us out of existence, while other forces will drive us to augment our brains to keep up with the forces of creative destruction. If the human form is shredded before we can ensure that whatever is after us has those most essential morally relevant qualities, we may end up snuffing out life and value in our light cone. But if we get it right, the entire light cone might sparkle with blazing value.
As it stands, the entire bathtub (of the human form) is getting dumped out the window, and we’d better quickly save the baby (moral value) while we still have the chance.
We’ll start with a look at the forces of change that humanity has to contend with:
Our collective understanding of LLM minds is remarkably nascent – and it’s clear that we’re not so much “programming” or “building” these most powerful systems – we’re “growing” them.
I’ll argue that we most likely have one or two generations of humans-as-they-are before we run into wildly radical transformation or complete destruction of humanity altogether.
We can think about some of the most important destructive and transformative forces at play in the world, and play around with them as individual variables with different sets of timelines.
To explain the graphic below, I’ll first lay out what I mean by the terms used in the chart:
Here’s the chart of variables and scenarios:
Let’s examine all five highlighted scenarios from top to bottom:
If AGI comes fast and is not controllable, then we would have no more ability to predict what it would do to us than a field mouse has to understand a human’s goals or motives. If we were lucky, it would take us into consideration morally (but even then, it might do things ‘for our own good’ through means we can’t possibly imagine – resulting in our transformation or destruction). But most likely, it would – at some point in its massively rapid development – buffet us out of existence through its indifference (https://danfaggella.com/irisk) and its extended phenotype (https://danfaggella.com/phenotype).
If AGI comes fast, even if it is “controllable” in some way, it will likely alter humans and the human condition, because it will make immersive AI and BCI viable in short order, and people will adopt it. Short AGI timelines make for short timelines for humans staying as they are – because either (a) AGI would transform us for the better by its own will, or (b) it would make BCI / merger technology available, and essentially all humans (whether driven by pleasure or power) would take that offer.
Similarly, if AGI essentially “does what we want”, this still leaves ample opportunity for human beings to tell AI to do all sorts of wildly destructive things – from making bioweapons to socially engineering civil unrest to assassinating political leaders and beyond.
In this scenario, the collapse of first-world civilization (China, the USA, the EU, Japan) occurs within 1-4 generations, before AGI arrives or before brain-augmenting BCI technology becomes viable.
These collapse scenarios obviously include risks of kinetic or nuclear war between democratic and authoritarian states (as mentioned above: A recent Foreign Policy poll placed kinetic conflict between the US and China at 24% likelihood between 2023 and 2033, and this doesn’t take into account the AGI race [or weaponized proto-AGI technologies] that could potentially speed up said conflict – nor does it take into account North Korea, Russia, or war in the Middle East).
Given demographic trends and other existential risks for humanity, we do not have infinite time to “decide” how we want to evolve and what happens next. China’s population is projected to shrink by roughly 400 million within just 50 years (for scale, the entire US population in 2025 is only around 340 million).
The uncouth must also be stated: By that time, the EU will have vastly higher proportions of its population from African, Southeast Asian, and Middle Eastern populations (which almost certainly bodes poorly for the safety and continuity of EU nations). In 100 years the USA will also have radically different demographics, and it isn’t at all self-evident that the wealth and relative peace we’ve enjoyed within America will continue indefinitely as the makeup of the population changes radically.
Uncomfortable as it is, we simply have no idea what these drastic demographic shifts (in aging and in genes) will do to first-world civilizations – but we know for sure that such changes are happening and seem borderline unstoppable, and it seems safe to say that the risk of civilization not working as it does today is significant.
If AGI is a full century away (an absurdly long timeline by almost any expert estimate, by the way), but immersive AI worlds and BCI are common at scale within one generation or less, we will see widespread transformation of the human condition via adoption of these transformative technologies. This seems like an unlikely scenario, as we may well need AGI in order to crack BCI, but it is hypothetically possible that it happens in the opposite order.
In this remarkably unlikely scenario, destructive forces (war, idiocracy, shrinking populations) don’t crumble society for at least 100 years, and truly disruptive technologies (AGI and BCI) are 4 full generations away.
But even this scenario is nowhere near an “eternal hominid kingdom.” Even if it takes 4 full generations, we are still going to be either destroyed or transformed. 100 years is a blink of an eye in the history of humans – a mere 0.033% of our total time on this planet.
Even in this “goldilocks zone” of hominid stasis, we have a literal blink of an eye before destruction or transformation catches up to us.
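(A quick back-of-the-envelope check on that 0.033% figure – this assumes the commonly cited rough estimate that Homo sapiens has existed for about 300,000 years:)

\[
\frac{100~\text{years}}{300{,}000~\text{years}} \approx 0.00033 \approx 0.033\%
\]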
…
In the scenarios above, the majority of viable futures (scenarios 1-4) involve humanity’s destruction or transformation within 1 to 2 generations, or roughly 10 to 50 years.
The most conservative possible scenario (scenario 5) gives humanity another 4 generations of stasis before destruction or transformation arrives.
Here’s what I (at the time of this writing, June of 2025) consider to be the relative odds of the five scenarios listed above: (Pardon the draft graphic, I’m making a nicer one as we speak)
Looking squarely at the possibility that humanity is on thin ice under essentially all future scenarios forces us to question whether a “humans (or bio-life) at the center of the moral world forever” worldview is the best worldview to maintain at this crucial hour. I argue it isn’t:
As we speak, we’re both (1) abandoning our current form, and (2) creating something that will almost certainly push humanity out of existence in the near-term.
We are pushing beyond the hominid form, and beyond biological intelligence.
Because moral value (consciousness and autopoiesis) exists in other biological forms, it seems reasonable to suspect that it may come to exist (or be made to exist, through diligent effort) in technological forms.
Because the human form (and potentially all of bio-life) is at risk of being eliminated or radically transformed in the near term, we ought to seriously study what value (consciousness and autopoiesis) is, and how it might continue to expand and flourish even if we (the hominid form) don’t make it out of the near future.
In other words:
We’d better understand and preserve the flame (value) because this specific torch (humanity) is unlikely to hold up for much longer.
Our perspective must naturally move up from anthropocentric to cosmic:
And from a cosmic perspective, the value at stake is vastly greater, and “getting the future right” is vastly more important.
The best and worst case scenarios from an Anthropocentric perspective:
But the stakes are infinitely larger from a Cosmic perspective:
It is even more important to “get the future right” when the chance of sparkling value in our light cone is at stake.
Not only is it probably impossible to stop the massive forces of destruction and transformation that are pressing down on humanity now, but it would (in the long term) be immoral to prevent…
Our challenge is that there is absolutely no guarantee that these splashing forces will automatically lead to a blossoming of rich, sentient, expansive post-human life. The great process of life that has carried up from wriggling proteins to sea snails to Homo sapiens is not guaranteed to continue blooming upwards.
With strong forces of destruction closing in around us – and with a relatively short time horizon to retain the human form (as it is today) – we must ask what we are turning into, and ensure that the flame can continue beyond our fleeting hominid torch.
There’s a very strong potential of the human form being shredded in the near term. And we don’t yet know how to define, never mind preserve and expand, the value of our form.
We have a chance of shredding our value along with our form – either self-destructing, or turning into something that isn’t good.
Stated bluntly:
If you’re an anthropocentrist in 2025+, you should be a pessimist – there is no vision of your life philosophy that has a chance of holding up well over the next 100 years.
If you’re a cosmist in 2025+, you at least have reason to maintain optimism to work towards possible good futures where life continues to flourish beyond us.
And working to preserve value is what we must now do. It will be our most cosmically significant aim, and most likely our last one.
Again, I’m arguing here that “value” is a combination of (1) consciousness, and (2) autopoiesis (the ability to unlock new reservoirs of potentia). Understanding those two qualities – and ensuring that they exist and carry on in the beings that we create or turn into – is our key role in ensuring that the flame carries on beyond our torch (because our torch is quickly going out).
If white-knuckle gripping onto an eternal hominid future is neither possible nor best (for us or for future forms of life), we’re left with the following two crucial priorities:
We should slow down the AGI arms race via international coordination so that we don’t hurl an unworthy successor into the world.
Some think tanks have given thought to this priority (CIGI, Narrow Path), and I’ve written a good deal on this myself over the last decade (SDGs of Strong AI, International AGI Governance – Unite or Fight). Unfortunately it seems likely that at least in the West, the populace at large probably has to be woken up to AGI risk before any politicians care (here’s why that is).
Whether it’s influencing the populace to incite a political singularity, or directly appealing to national or international policymakers, it’s crucial to slow the AGI arms race and find a kind of international coordination that will (a) allow us to ensure that what we conjure is actually worthy before we hurl it into the world, and (b) avoid total global totalitarianism. Arriving at such a minimally cooperative set of agreements is absolutely paramount. Some potestas is needed to balance the potentia of unbridled (and still barely understood) early AGIs.
We must invest in understanding sentience and autopoiesis so that we can strengthen the flame of life itself.
As of today, there is shockingly little effort to study either trait.
There are some current efforts (Symmetry Institute, CIMC, Conscium) to study consciousness, and maybe a handful studying autopoiesis (though certainly not in a cosmic or moral sense) – but it all boils down to something like 0.0001% of the total investment of attention and capital going into AGI capabilities. The percentage should not be so low.
I suspect that these areas of inquiry are ignored almost entirely for the following reasons:
But if what we are building will carry on beyond us – we must ensure that these new entities be worthy, and we must invest in understanding and creating these worthy traits in our successors.
…
Doubtless there are many factors outside of our control – and countless forces of destruction or transformation that humanity isn’t even considering right now.
But if a concerted, global, strong effort to (1) prevent destruction and (2) get transformation right improves our odds of a blooming posthuman future by even 1%, it seems more than worth swinging for.