There is No Love After the Singularity

I have heard it argued that post-human intelligence will be “loving,” or will inevitably express love to other intelligences – humans and animals included.

In this essay, I aim to lay out an argument for why the singularity or post-human transition will almost necessarily be the end of “love” as we know it – and why that is probably for the best. I also lay out an argument for why this belief in inevitable “love” from machines is likely to be false, and potentially harmful to the mission of creating a beneficial post-human transition.

Powerful Intuitions and Feelings are Not Eternal Truths

The heart of this argument goes beyond the belief in the inevitability of “love,” and has more to do with our ability to confuse our subjective vantage-point with reality itself.

The Egyptian reverence for the sun as a god was a mistake. The glowing orb, that giver of light, of warmth, of food, that greatest power in the heavens, wasn’t a deity, but a burning ball of matter. It couldn’t have seemed like anything other than a god to early humans, but all that reverence, all that feeling of what it must be, doesn’t change what it is. The loftiness of our feelings doesn’t change the fact that they are a subjective veneer of hominid-related meaning over a reality that is mostly beyond our comprehension.

Similarly, our reverence for the eternal “truth” of our hominid feelings is a mistake. For us, our existing neural circuits, our fettered and limited sentient experience is the only pair of goggles we can look at reality through. So we suspect that the deepest emotions we feel – love being one of them, maybe joy, maybe sorrow, or physical pain – would be mirrored in future superintelligences, even if they share none of our genetic code or mind substrate whatsoever.

In the words of the Concord sage:

They do not yet perceive, that light, unsystematic, indomitable, will break into any cabin, even into theirs. Let them chirp awhile and call it their own. If they are honest and do well, presently their neat new pinfold will be too strait and low, will crack, will lean, will rot and vanish, and the immortal light, all young and joyful, million-orbed, million-colored, will beam over the universe as on the first morning. – Emerson, Self-Reliance*

We think that what wows us, what moves us, must be “real” or “eternal” in some inevitable way – and must certainly be an important part of the qualia and value system of any post-human intelligence (even if it shares none of our DNA or our mental substrate).

We Lose Love, But What Do We Gain?

I find no reason to believe the following:

  • Artificial general intelligence would naturally love humanity and other sentient life
  • All vastly more intelligent, radically cognitively-enhanced humans will naturally be loving
  • Love permeates the universe like a kind of underlying energy force, beyond humans or animals

These arguments are not limited to the idea of love. The idea of joy, or of curiosity, or of any other anthropomorphic notion – none of these are necessary in post-human life, or ever-present kinds of woo-woo “energy” underpinning nature.

So am I some kind of pessimist? I don’t think so, for two reasons.

First, I also don’t think “hate” or “sorrow,” or any kind of negatively charged human qualia, will be inherent in post-human machines, and I don’t believe these forces to be ever-present in the aether, either.

Second, I see plenty of good reasons to believe the following:

  • Humans, Mammalia, and lightly cognitively enhanced humans will continue to experience “love” as we now know it (so long as the vastly post-human intelligence allows lower intelligences to exist)
  • Vastly post-human intelligence will experience a wide, rich spectrum of positive qualia that is more nuanced and varied and powerful than anything we humans can imagine. Might some of these permutations of positive qualia be “love”-like? Maybe. But our hominid “love” would be about as rich and interesting to drastically post-human intelligences as earthworm “love” is to humans. They aren’t to be compared. We humans don’t mourn that we’ve lost the ability to feel the meagre “love” or pleasure qualia of the earthworm, and post-human intelligence won’t (and damn well shouldn’t) mourn the loss of the fettered and limited notion of human “love.”

The end of the hominid notion of “love” is not pessimistic. It is the belief that any drastic steps forward in intelligence and sentience naturally rip open entirely new possibility-spaces of vastly greater complexity, and that it is impossible to hold childishly anthropomorphic notions to be eternal truths.

Facing the Post-Human Conundrum Squarely

I’ve written previously about what I consider to be the dangers of believing in inevitable AGI benevolence. Namely:

  • If we believe that AGI or radically enhanced transhumans will inevitably be friendly, we’ll be less likely to take the necessary precautions before we “cross the line” into vastly post-human intelligence. This may result in catastrophe and destruction as post-human intelligences behave in ways we can’t possibly understand.

I’m not here to disparage human notions of love – I’m here to say that:

  • There are infinite oceans of possible positive qualia, all vastly beyond our notion of “love” today – and intelligence that blooms into this wide qualia-space is not “bad” for leaving behind hominid versions of qualia, but is, if anything, good. Earthworm complexity bloomed into the sonnets and languages and creativity of man, and man may similarly bloom into something more. While it is rational that we shouldn’t want to race foolhardily into the post-human future, ardently wishing to permanently halt this great progress forever (keeping humans at the top of the heap for billions more years) and cap it at the level of hominids is ridiculous and probably downright wrong.
  • It’s important to understand that if we step beyond human intelligence, we step willingly into the unknown, where our fettered little values and ideas are in no way guaranteed to hold true or keep us happy and safe.

Yes, we might influence the initial trajectory of AGI or transhumanism in its earliest forms, and yes, this influence is morally important and could easily be argued to be the most significant thing we do as a species (i.e. determining how to jumpstart the intelligence that will eventually populate the galaxy).

But once a superintelligence gets as far beyond us as we are beyond chimpanzees (whose genes differ from ours by only about 1%, by the way), we are handing off the baton of truth, of values, and of everything else.

I believe that, over time, this hand-off will be inevitable, and that we should focus on intergovernmental collaboration to determine the best way to facilitate that transition, when the time is right. This might be within the next 100 years, or the next 1000 years; I won’t speculate on timelines here (though many experts have their own timelines, see: When Will We Reach the Singularity? – A Timeline Consensus from AI Researchers).

We should do this carefully, with great caution, and calibrate the best kind of transition we can (insomuch as we can estimate it), minimizing dangers and maximizing upsides. Not too long after that, however, the future won’t be in our hands, and there will be higher judgements of “the good” than we hominids can conceive. With great caution should we hand off this baton, and with no pollyanna expectations of love, joy, or any other human notion.


Header image credit: Farmers’ Almanac

* Emerson himself, on dozens of occasions, expresses (in his characteristically optimistic way) a belief in a kind of eternal love and eternal good. If I take him seriously, if I read his essays (especially Circles) honestly and frankly, then I suspect he was implying a kind of “good” beyond our conception, represented by light overcoming darkness. While he had a kind of dogmatic, steadfast optimism, I believe that he could see that man’s interpretations and feelings were just one rung on the infinite “spires of form.”

Note: Someone will inevitably read this essay and want to accuse me of not feeling or valuing love in my own life. Nothing whatsoever in this essay should lead you to believe that. I can love my friends and still refuse to inflate that love into a cosmic principle. I can enjoy the richness of love in my own life without calling it the peak of all possible spires of all possible qualia.