Human-AI Merger and Mind Uploads – Pros, Cons, and Scenarios

As the AGI race intensifies, it raises more questions about humanity’s role in the future of life and intelligence.

People ask:

“How can we stay alive?”

“Should we aim to be kept as pets or novelties?”

“Should we aim to merge with AGI itself, or upload our consciousness?”

In this essay I’ll argue that we shouldn’t merge or upload simply to ensure our own survival or to eternally extend hominid dominance of the cosmos, but to ensure that life itself (well beyond humans) continues surviving and unfolding new kinds of “value” into the universe.

We’ll discuss the following in order:

  • The purpose of a merger or mind upload
  • Scenarios of merger or mind uploading
  • The pros and cons of merger or uploading
  • Humanity as a great conduit to expanding potentia

(Note: For the sake of this article “merger” represents any kind of brain-computer interface [invasive or non-invasive], and “upload” will mean any kind of porting of a human instantiation of consciousness out of wetware and into other non-biological substrates where it might hypothetically [a] operate faster, with more powers or capability, or with vastly fewer limitations, or [b] contribute to a greater aggregate noosphere of intelligences.)

The Purpose of a Merger or Mind Upload

Why should we merge or upload our consciousness?

For some of us, it seems untenable to remain relevant (or even alive) in a physical world totally dominated by vastly post-human AI entities, and merging or uploading seems like a pathway to survival.

For others, merger might be a way for the human lineage to still maintain dominance. Instead of losing our position to posthuman AGI entities, we might wield and control them, keeping human agency as the driving force of the future. 

Nearly 100% of the reasons I hear posited for why humans should merge are anchored entirely in an Individualistic or Anthropocentric worldview.

A much greater tragedy than human attenuation would be the end of life itself – and insisting that human beings steer the future, or even that eternal human survival be a priority, risks putting out the flame for the sake of one torch (read also: The AGI Path of Blooming vs Servitude).

Scenarios of a Merger or Mind Upload

The categories of merger or upload scenarios can roughly be broken down based on their impact on humanity, and on the totality of posthuman life.

For the sake of this article, let’s presume that a scenario that “helps” implies a greater ability to survive, a greater wellbeing (more positive qualia), or more power or capability to act (more potentia). 

With that in mind, we could break all merger or upload scenarios down into four major categories:

[Image: Possible Roles for Humanity]
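The fourfold breakdown is simply a 2x2 grid over two impacts: whether a scenario helps humanity, and whether it helps posthuman (non-biological) life. A minimal sketch of the taxonomy (the function and variable names here are hypothetical, introduced only for illustration):

```python
# The four merger/upload scenarios as a 2x2 grid:
# (helps humanity?, helps posthuman life?) -> scenario name
SCENARIOS = {
    (False, False): "Waste",        # helps neither side
    (True,  False): "Freeloader",   # helps humans only
    (False, True):  "Sacrifice",    # helps posthuman life only
    (True,  True):  "Contributor",  # helps both
}

def classify(helps_humans: bool, helps_posthuman_life: bool) -> str:
    """Return the scenario name for a given combination of impacts."""
    return SCENARIOS[(helps_humans, helps_posthuman_life)]
```

For example, a scenario that benefits the humans involved while imposing only costs on posthuman life would be classified as `classify(True, False)`, i.e. "Freeloader."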

Let’s examine each scenario directly:

Four Scenarios of Merger and Uploading

Scenario 1: Waste – In this case, a human-AI merger or mind upload is completed or attempted, but it doesn’t net any real benefit. 

Specific scenarios here might include:

  • Mergers and uploads are attempted and fail
  • Mergers and uploads that are possible simply happen to have a net-negative impact on the humans affected (no marked increase in power or wellbeing), and don’t accrue any benefits to (but do require time and resources from) an AI or AGI

This is the least desirable of all situations. If it were somehow recognized that mergers or uploads (either one type, or all of them) were extremely likely to be a waste, they likely shouldn’t be attempted at all.

Scenario 2: Freeloader – In this case, the humans who merge or upload get benefits in the form of greater wellbeing, lifespan, or capabilities, but posthuman or non-biological life either accrues no benefits, or is actively hindered by the exchange.

Specific scenarios here might include:

  • Humans who merge or upload get increased wellbeing, or power, but it is merely a cost upon the AI systems that grant these benefits to humans.
  • Humans use mergers or uploads to lock in a future dominated by humans and human-like concerns, fettering the larger expanse of power, value, and survivability that would be ushered in by greater forms of intelligence than mere augmented humans.

This is a situation where the flame of life is, on the aggregate, harmed by the preference for one torch of life (humanity). I would argue that this, again, is non-preferable, as it hurts the total possible light cone of value.

Scenario 3: Sacrifice – In this case, humanity contributes meaningfully to the expanse of posthuman or non-biological life and intelligence, but humanity itself doesn’t benefit much at all.

Specific scenarios here might include:

  • Merger fails to actually increase the powers of humans and/or mind-upload experiments fail to transfer consciousness – but both initiatives shed crucial light on the nature of consciousness and intelligence, and this helps the greater cause of expanding life itself.
  • A merger or mind-upload scenario may be necessary to “grant” some key ingredient of intelligence to non-biological substrates (such as “volition” or “consciousness” – both of which are example terms because we don’t understand much about this stuff right now, unfortunately). Humans themselves are more-or-less discarded as this crucial ingredient is handed off.

This situation is positive for the aggregate light cone of possible value in the universe, but seems unfortunate for us individual instantiations of human consciousness (you and I).

That said, if attenuation is imminent anyway, and our only options are (a) getting killed by an unworthy successor AI, or (b) finding some way to merge with the AI while we can, gaining a bit more assurance that it has the traits we value – then option (b) isn’t so bad.

Scenario 4: Contributor – In this case, humanity’s condition (our lifespans, wellbeing, and/or capabilities) expands meaningfully, and in doing so, so do the lifespans, wellbeing, and capabilities of posthuman life.

Specific scenarios here might include:

  • Brain-computer interfaces grant moral qualities, new powers, and new understanding to AGI, and humans get a reasonable amount of time to enjoy their own period of being “leveled up.” A period of symbiosis ensues – with “ingredients” exchanged between biological and non-biological, with new structures emerging and being understood.

Doubtless this is the ideal scenario, but it isn’t one that we can count on happening automatically.

Fortunately, it does seem that, at the time of this writing (May 2025), AI systems lack a great number of the traits that make human life and other lives valuable (notably, autopoiesis and sentience).

Scenarios are Varied and Fluid, and Probably End in the Attenuation of Biological Life

There may be some experiments in BCI or uploading that begin as a Waste, but which end up becoming a kind of Sacrifice. There might be other initiatives that start as a Contribution and end up becoming a Freeloader situation. Across multiple iterations and initiatives we might expect to see many such scenarios happening in parallel.

It’s unlikely that all possible merger and uploading scenarios will ubiquitously be a Sacrifice, or a Waste, or anything else. And even if that did occur, they would doubtless change and morph into another of the four scenarios over time (in ways likely beyond my ability to put into words).

It seems likely that, given a long enough time horizon, most Contribution and Sacrifice scenarios will end in Freeloader or Waste scenarios. Long-term, Kurzweil’s presumption seems right: the biological part of any kind of hybrid intelligence is very likely to become less and less important over time – until biological systems (cells, cell walls, organelles, DNA) are simply not present – and new means and mediums of potentia creation and expansion take over.

We might use the analogy of the flame, which doesn’t (and shouldn’t) linger forever on a single torch:

[Image: The Flame & the Torch]

We might also take the Five Stages of Grief and apply them to the posthuman transition:

[Image: Posthuman Acceptance – Stages of Posthuman Grief]

In any case, we must accept that long-term, humanity – and probably all of biological life – attenuates. And it can either go extinct, or it can play a role in the transformation of a worthy and higher-potentia form that will live on beyond it:

[Image: Handing Up the Baton – Four Viable End Games for Humanity]

Pros and Cons of Merger and Uploading

From the perspective of stewarding the flame, there could be countless pros and cons to merger or mind uploading – but I’ve done my best to boil down what I consider to be the most important of each:

Uploading or AI Merger – Pros

1. It is a surer path to expanding the flame, because we know humans are conscious.

We can debate whether AGI is conscious, but we don’t debate whether humans are conscious. We have cephalized central nervous systems, and (unless you’re a solipsist), you can presume that if you, reader, are conscious, then so are the hominids around you.

What’s more, we don’t have to wonder whether humans are capable of at least some level of expanding potentia. We have opened up realms of cultural and technological potentia, and we might imagine that, barring nuclear war, total population collapse, or idiocracy (some of which unfortunately seem closer to reality than we might like), we have more expanding to do – especially if our minds were radically upgraded.

There are certainly realms of potentia beyond humans (see image below), but we still have more to give.

[Image: Potentia Table 2 – Expansion in Various Domains Over Time]

In a world where we haven’t cracked a way to “prove” consciousness, it may be best to augment human minds and make sure the flame stays lit. See the food truck analogy to see what I mean.

2. It may extend the relevance of some currently living humans.

In the near-term, augmented human beings might be able to eke out a way to contribute meaningfully to sciences, governance, economics, and other fields as far removed from human “science” or “governance” as humans are from chimpanzees.

These augmented humans with vastly more memory, with new motivational systems (not fettered to the same drives for sex or significance that grips most hominids), with new senses (beyond the meager 5 we have today) – might not be able to “keep up” entirely with AGI, but they could help to steward early proto-AGI forward.

Long-term, there may be a broader kind of symbiosis where some humans are still useful, or where humans are cared for without being useful by some kind of benevolence of the higher-potentia entities. Or, long-term, we may simply attenuate or be wiped out intentionally to make room for other goals – to use atoms in a more meaningful way.

It is possible that there is something nearly irreplaceable about human agency, and that uploaded human intelligence manages to be a better flame-carrier than any other substrate – but this seems wildly unlikely, particularly if AGI and significant mind augmentation already blur the lines significantly.

(Note: It might be argued that the extended survival of some humans doesn’t really matter from a cosmic alignment standpoint, and I suspect that is true. But as a human being who loves many currently living human beings, I would aim to ensure that humans serve the greater stream of life for a bit longer – so that we might have not only a longer existence, but a useful and contributive (to the greater whole) one.)

3. It may reveal insights into intelligence and help us better calibrate toward a worthy successor.

The process of mind uploading and AGI-human merger seems itself to be useful in understanding the nature of intelligence.

We have such a measly grasp of what “intelligence” truly means and how it operates (as Levin’s work shows so clearly), and we have an even worse conception of what “consciousness” is and how it works.

It seems very reasonable to suspect that the messy intersection of brain and machine (whether through brain prosthetics, “neural lace”, or any variety of approaches) would yield some important insights into the nature of some of the things we value (consciousness and autopoietic intelligence) – if for no other reason than that augmented humans can report what they are experiencing (or what they’re capable of) once tinkered with.

It’ll be hard work, and dangerous work, but there may be many aspects of value which we couldn’t understand (and so, couldn’t be most capable of preserving and expanding) unless we pursued BCI head-on.

Uploading or AI Merger – Cons

1. Drastically augmented minds are likely to come into conflict with each other.

For a huge bulk of our time on this earth, humans were at war, and did horrible, horrible things to each other.

Even as we speak wars rage, and unspeakable crimes are committed every day for financial gain, for sick pleasure, out of jealousy, or a million other reasons.

It’s frankly shocking that society is as (relatively) homicide-free as it is, and that we’ve established governance and ways of bounding human incentives so that we mostly benefit each other rather than harm each other. As far as I’m concerned, this is the towering achievement of our species, beyond any specific technological advance (as the vast majority of such advances are predicated on civilization itself existing).

Now imagine the following:

  • One lab in Boston creates humans with drastically augmented memory, allowing them to access the internet and all human knowledge via BCI.
  • Another lab in Beijing allows humans to control multiple military drones at one time by tying their proprioception into guidance instructions for said machines.
  • Another lab in San Francisco allows humans to turn “on” or “off” certain emotional drives and reward circuits – such as allowing people to have a high wellbeing without human relationships, or lifting someone’s general level of focus or enthusiasm drastically at will.
  • Another lab somewhere else creates people who can do most of their “thinking” in outsourced substrates, effectively speeding up their thinking 100x, allowing them to write, research, decide, almost instantly.

To imagine that all of these humans would magically “share” the same physical world and stay within the control of “regular” (non-augmented) humans seems ridiculous. 

A common trope here is the idea that all will be well and peaceful because “intelligence and compassion go hand in hand.” There is a pervasive belief that more intelligent entities necessarily cooperate more and would automatically be compassionate to other life.

This, in my opinion, wholly misunderstands the origin of “kindness.” People think “kindness = emergent selflessness that bubbles up from more intelligent life, and showers love on all other beings,” when in fact it is more like “kindness = coordination when it makes sense for the self-interest of the agent doing the acting.” I’ve written a separate article on this topic.

Suffice it to say here that it would be absurd to suspect that a panoply of wildly divergent, powerfully augmented human superbeings would sing kumbaya with each other, never mind with regular humans.

2. Augmented minds will be more morally valuable than unaugmented ones, causing conflict.

Not only is it ridiculous to suspect that wildly augmented super-humans would get along peacefully, it would also be ridiculous to suspect that these superhumans would somehow see themselves as equal to regular (unaugmented) humans.

For example:

  • Imagine a human who could think 100x faster than anyone else around them. Doing the work of dozens of people in an hour. Deciding, researching, creating, etc. Would such a person not feel (rightly) that their life is worth more than a normal human, because they are capable of doing so much more than a normal human? If such augmented people were the only key vanguard force able to move forward science and innovation when regular humans are left behind, would these superhuman see themselves as our “equals”? I think rightly not.
  • Imagine a human augmented to experience a massive gradient of sentient experience beyond normal humans. Imagine oceans of bliss 10x deeper, including many emotional ranges that normal humans have no access to (just as sea snails can’t appreciate a poem or understand humor). Would such a person think that they should suffer just the same as someone else? Would they not (rightly) consider their experience to be more valuable than that of regular humans?

The friction caused by creating more morally valuable entities than humans would be immensely jarring, and might not only cause conflict with the augmented humans, but may also lead to public sentiment swaying heavily against such research.

3. Regular (unaugmented) humans almost certainly won’t be useful or valuable.

In addition to being less morally valuable in the eyes of superhumans (and many humans), regular unaugmented humans will also simply not end up being that useful.

Below an 82 IQ the US military won’t recruit you, even to be cannon fodder or to clean bathrooms, because below a certain amount of mental horsepower, people are genuinely less useful. Thankfully, in rich modern societies we are mostly able to care for completely non-contributing members of society, but those persons live completely at the behest of the more intelligent and capable people who run things.

As soon as augmentation becomes viable, the incentives for it to proliferate would be nearly impossible to stop. 

Imagine if any of the following became possible:

  • To access the entire internet in real time via thought
  • To volitionally modulate one’s emotional experience in order to sustain focus or enthusiasm whenever it serves, and even remove or downplay entire “reward circuits” (like the drive for sex, or for delicious foods)
  • To use a neural prosthetic of “extra cortex” that functionally improves IQ by 40 points

These people would be drastically more effective at a variety of tasks, and (importantly) at the meta-tasks of governing and innovating – of doing almost everything – to the point where they would eventually be supporting a larger class of people who are mostly useless (in contributing to science, the economy, innovation, etc.).

Some humans suspect that this would be wonderful because humans would definitely be treated well and respected as equals, but this seems wildly unlikely – and we should generally be wary of aspiring to be a Piping Plover.

Humanity as a Great Conduit to Expanding Potentia

It seems obvious that at the present time (I’m writing this in April of 2025), humanity is the best vessel for safely carrying value forward.

AGI on its own isn’t autopoietic (yet?), and isn’t provably conscious. If by no other evidence than the inner experience you’re having reading this article, you can verify at least your own sentience.

Our torch is not an arbitrary one.

We seem very likely to build, or to merge into, whatever higher-potentia processes and forms exist beyond us. We have the potential to be a kind of Great Conduit to whatever is beyond us.

But we also have the potential to recklessly hurl unworthy successor AGIs into existence, to squash the flame of life instead of proliferating it.

Despite the (very serious) risks associated with merger and uploading (both for the individual humans involved, and for the conflicts caused by vastly upgraded humans entering the human order), it seems correct – at least as of today – for humanity to focus a significant amount of coordinated effort into carefully pursuing merger and upload paths.

In other words, the third major pro for merging or uploading likely outweighs all the cons:

At present, this kind of experimentation seems to be a crucial way to get a deeper understanding of the “valuable stuff” in order to optimize for and expand it. We can’t be stewards of the flame if we don’t even know what it is.

This will invariably involve the same kind of global coordination and governance that would be required to build AGI without immediately destroying ourselves. I wish us well on finding the goldilocks zone of governance that our present situation will require – it certainly won’t be easy.


Header image credit: artincontext.org