As the AGI race intensifies, it raises ever more questions about what humanity’s role is in the future of life and intelligence.
People ask:
“How can we stay alive?”
“Should we aim to be kept as pets or novelties?”
“Should we aim to merge with AGI itself, or upload our consciousness?”
In this essay I’ll argue that we shouldn’t merge or upload simply to ensure our own survival or to eternally extend hominid dominance of the cosmos, but to ensure that life itself (well beyond humans) continues surviving and unfolding new kinds of “value” into the universe.
We’ll discuss the following in order:

Why we might merge or upload our consciousness
The four major categories of merger and upload scenarios
The pros and cons of merger and uploading, from the perspective of stewarding the flame
Where that leaves us today
(Note: For the sake of this article “merger” represents any kind of brain-computer interface [invasive or non-invasive], and “upload” will mean any kind of porting of a human instantiation of consciousness out of wetware and into other non-biological substrates where it might hypothetically [a] operate faster, with more powers or capability, or with vastly fewer limitations, or [b] contribute to a greater aggregate noosphere of intelligences.)
Why should we merge or upload our consciousness?
For some of us, it seems untenable to remain relevant (or even alive) in a physical world totally dominated by vastly post-human AI entities, and merging or uploading seems like a pathway to survival.
For others, merger might be a way for the human lineage to still maintain dominance. Instead of losing our position to posthuman AGI entities, we might wield and control them, keeping human agency as the driving force of the future.
Nearly 100% of the reasons I hear posited for why humans should merge are anchored entirely in an Individualistic or Anthropocentric worldview:
A much greater tragedy than human attenuation would be the end of life itself – and insisting that human beings steer the future, or even that eternal human survival be a priority, risks putting out the flame for the sake of one torch (read also: The AGI Path of Blooming vs Servitude).
The categories of merger or upload scenarios can roughly be broken down based on their impact on humanity, and on the totality of posthuman life.
For the sake of this article, let’s presume that a scenario that “helps” implies a greater ability to survive, a greater wellbeing (more positive qualia), or more power or capability to act (more potentia).
With that in mind, we could break all merger or upload scenarios down into four major categories:

Waste – benefits neither humanity nor posthuman life
Freeloader – benefits humanity, but not posthuman life
Sacrifice – benefits posthuman life, but not humanity
Contributor – benefits both humanity and posthuman life

Let’s examine each scenario directly:
Scenario 1: Waste – In this case, a human-AI merger or mind upload is completed or attempted, but it doesn’t net any real benefit.
Specific scenarios here might include:
This is the least desirable of all situations. If it were somehow recognized that mergers or uploads (either one type, or all of them) were extremely likely to be a waste, they probably shouldn’t be attempted at all.
Scenario 2: Freeloader – In this case, the humans who merge or upload get benefits in the form of greater wellbeing, lifespan, or capabilities, but posthuman or non-biological life either accrues no benefits, or is actively hindered by the exchange.
Specific scenarios here might include:
This is a situation where the flame of life is, on the aggregate, harmed by the preference for one torch of life (humanity). I would argue that this, again, is non-preferable, as it hurts the total possible light cone of value.
Scenario 3: Sacrifice – In this case, humanity contributes meaningfully to the expanse of posthuman or non-biological life and intelligence, but humanity itself doesn’t benefit much at all.
Specific scenarios here might include:
This situation is positive for the aggregate light cone of possible value in the universe, but seems unfortunate for us individual instantiations of human consciousness (you and me).
That said, if attenuation is imminent anyway, and our only other options are (a) getting killed by an unworthy successor AI, or (b) finding some way to merge with the AI while we can, gaining potentially a bit more assurance that it has the traits we value – then option (b) isn’t so bad.
Scenario 4: Contributor – In this case, humanity’s condition (our lifespans, wellbeing, and/or capabilities) expands meaningfully, and in doing so, so do the lifespans, wellbeing, and capabilities of posthuman life.
Specific scenarios here might include:
Doubtless this is the ideal scenario, but it isn’t one that we can count on happening automatically.
Fortunately, though, it does seem that, at the time of this writing (May 2025), AI systems lack a great number of the traits that make human life and other lives valuable (notably, autopoiesis and sentience).
There may be some experiments in BCI or uploading that begin as a Waste, but which end up becoming a kind of Sacrifice. There might be other initiatives that start as a Contribution, and end up becoming a Freeloader situation. Across multiple iterations and initiatives we might expect to see many such scenarios happening in parallel.
It’s unlikely to be the case that all possible merger and uploading scenarios are ubiquitously a Sacrifice or a Waste or anything else. And even if that did occur, they would doubtless change and morph into another of the four scenarios (most of which would likely be beyond my ability to put into words).
It seems likely that, given a long enough time horizon, most Contribution and Sacrifice scenarios will end in Freeloader or Waste scenarios. Long-term, Kurzweil’s presumption seems right: the biological part of any kind of hybrid intelligence is very likely to become less and less important over time – until biological systems (cells, cell walls, organelles, DNA) are simply not present – and new means and mediums of potentia creation and expansion take over.
We might use the analogy of the flame, which doesn’t (and shouldn’t) linger forever on a single torch:
We might talk about the Five Stages of Grief, and apply them to the posthuman transition:
In any case, we must accept that long-term, humanity – and probably all of biological life – attenuates. And it can either go extinct, or it can play a role in the transformation of a worthy and higher-potentia form that will live on beyond it:
From the perspective of stewarding the flame, there could be countless pros and cons to merger or mind uploading – but I’ve done my best to boil down what I consider to be the most important of each:
1. It is a surer path to expanding the flame, because we know humans are conscious.
We can debate whether AGI is conscious, but we don’t debate if humans are conscious. We have cephalized central nervous systems, and (unless you’re a solipsist), you can presume that if you, reader, are conscious, then so are the hominids around you.
Once more, we don’t have to wonder if humans are capable of at least some level of expanding potentia. We have opened up realms of cultural potentia, and technological potentia, and we might imagine that barring nuclear war, total population collapse, or idiocracy (some of which unfortunately seem closer to reality than we might like), we have more expanding to do – especially if our minds were radically upgraded.
There are certainly realms of potentia beyond humans, but we still have more to give.
In a world where we haven’t cracked a way to “prove” consciousness, it may be best to augment human minds and make sure the flame stays lit. See the food truck analogy for what I mean.
2. It may extend the relevance of some currently living humans.
In the near-term, augmented human beings might be able to eke out a way to contribute meaningfully to sciences, governance, economics, and other fields that are as far removed from human “science” or “governance” as humans are from chimpanzees.
These augmented humans – with vastly more memory, with new motivational systems (not fettered to the same drives for sex or significance that grip most hominids), with new senses (beyond the meager five we have today) – might not be able to “keep up” entirely with AGI, but they could help to steward early proto-AGI forward.
Long-term, there may be a broader kind of symbiosis where some humans are still useful, or where humans are cared for without being useful, through some kind of benevolence of the higher-potentia entities. Or, long-term, we may simply attenuate or be wiped out intentionally to make room for other goals – to use atoms in a more meaningful way.
It is possible that there is something nearly irreplaceable about human agency, and that uploaded human intelligence would manage to be a better flame-carrier than any other substrate – but this seems wildly unlikely, particularly if AGI and significant mind augmentation already blur the lines significantly.
(Note: It might be argued that the extended survival of some humans doesn’t really matter from a cosmic alignment standpoint, and I suspect that is true. But as a human being who loves many currently living human beings, I would aim to ensure that the greater stream of life is served by humans for a bit longer – so that we might have not only a longer existence, but hopefully a useful and contributive (to the greater whole) existence.)
3. It may reveal insights into intelligence and help us better calibrate toward a worthy successor.
The process of mind uploading and AGI-human merger seems itself to be useful in understanding the nature of intelligence.
We have such a measly grasp of what “intelligence” truly means and how it operates (as Levin’s work shows so clearly), and we have an even worse conception of what “consciousness” is and how it works.
It seems very reasonable to suspect that the messy intersection of brain and machine (whether through brain prosthetics, “neural lace”, or any variety of other approaches) would yield some important insights into the nature of some of the things we value (consciousness and autopoietic intelligence) – if for no other reason than the fact that augmented people can give a report as to what they are experiencing (or what they’re capable of) once tinkered with.
It’ll be hard work, and dangerous work, but there may be many aspects of value which we couldn’t understand (and so couldn’t preserve and expand) unless we pursued BCI head-on.
1. Drastically augmented minds are likely to come into conflict with each other.
For a huge portion of our time on this earth, humans were at war, and did horrible, horrible things to each other.
Even as we speak, wars rage, and unspeakable crimes are committed every day – for financial gain, for sick pleasure, out of jealousy, or a million other reasons.
It’s frankly shocking that society is as (relatively) homicide-free as it is, and that we’ve established governance and ways of bounding human incentives so that we mostly benefit each other rather than harm each other. As far as I’m concerned, this is the towering achievement of our species, beyond any specific technological advance (as the vast majority of such advances are predicated on civilization itself existing).
Now imagine the following:
To imagine that all of these humans would magically “share” the same physical world and stay within the control of “regular” (non-augmented) humans seems ridiculous.
A common trope here is the idea that all will be well and peaceful because “intelligence and compassion go hand in hand.” There is a pervasive belief that more intelligent entities necessarily cooperate more, and would automatically be compassionate to other life.
This, in my opinion, wholly misunderstands the origin of “kindness.” People think “kindness = emergent selflessness that bubbles up from more intelligent life, and showers love on all other beings,” when in fact it is more like “kindness = coordination when it makes sense for the self-interest of the agent doing the acting.” I’ve written a separate article on this topic.
Suffice it to say here that it would be absurd to suspect that a panoply of wildly divergent, powerfully augmented human superbeings would sing kumbaya with each other, never mind with regular humans.
2. Augmented minds will be more morally valuable than unaugmented ones, causing conflict.
Not only is it ridiculous to suspect that wildly augmented superhumans would get along peacefully with each other, it would also be ridiculous to suspect that these superhumans would somehow see themselves as equal to regular (unaugmented) humans.
For example:
The friction caused by creating more morally valuable entities than humans would be immensely jarring, and might not only cause conflict with the augmented humans, but may also lead to public sentiment swaying heavily against such research.
3. Regular (unaugmented) humans almost certainly won’t be useful or valuable.
In addition to being less morally valuable in the eyes of superhumans (and many humans), regular unaugmented humans will also simply not end up being that useful.
Below an 82 IQ the US military won’t recruit you, even to be cannon fodder or to clean bathrooms, because below a certain amount of mental horsepower, people are genuinely less useful. Thankfully, in rich modern societies we are mostly able to care for completely non-contributing members of society, but those persons live completely at the behest of the more intelligent and capable people who run things.
As soon as augmentation becomes viable, the incentives for it to proliferate would be nearly impossible to stop.
Imagine if any of the following became possible:
These people would be drastically more effective at a variety of tasks, and (importantly) at the meta-tasks of governing, innovating, and doing almost everything else – to the point where they would eventually be supporting a larger class of people who are mostly useless (in contributing to science, the economy, innovation, etc.).
Some humans suspect that this would be wonderful because humans would definitely be treated well and respected as equals, but this seems wildly unlikely – and we should generally be wary of aspiring to be a Piping Plover.
It seems obvious that at the present time (I’m writing this in April of 2025), humanity is the best vessel for safely carrying value forward.
AGI on its own isn’t autopoietic (yet?), and isn’t provably conscious. If by no other evidence than the inner experience you’re having while reading this article, you can verify at least your own sentience.
Our torch is not an arbitrary one.
We seem very likely to build, or to merge into, whatever higher-potentia processes and forms exist beyond us. We have the potential to be a kind of Great Conduit to whatever is beyond us.
But we also have the potential to recklessly hurl unworthy successor AGIs into existence, to squash the flame of life instead of proliferating it.
Despite the (very serious) risks associated with merger and uploading (both for the individual humans involved, and for the conflicts caused by vastly upgraded humans entering the human order), it seems correct – at least as of today – for humanity to put a significant amount of coordinated effort into carefully pursuing merger and upload paths.
In other words, the third major pro for merging or uploading likely outweighs all the cons:
At present, this kind of experimentation seems to be a crucial way to get a deeper understanding of the “valuable stuff” in order to optimize for and expand it. We can’t be stewards of the flame if we don’t even know what it is.
This will invariably involve the same kind of global coordination and governance that would be required to build AGI without immediately destroying ourselves. I wish us well on finding the Goldilocks zone of governance that our present situation will require – it certainly won’t be easy.