If it’s All Subjective, it’s Objective

In many essays, I tout the moral mandate for humanity to construct a vastly posthuman intelligence (a worthy successor) that might expand its powers (potentia) and carry life (the flame) into the multiverse (note: I advocate we do this slowly and carefully, with coordination, not a brute arms race).

Common objections to these statements involve the claim that my ideas of morality are subjective and arbitrary, and that someone else’s vision (say, of a continued “normal” human [in 2020s terms] life, or a world where AGI is never created and humans slowly expand to Mars and the Moon) is just as valid a destination for the trajectory of intelligence.

This comment on a Reddit thread is indicative of what I hear every day in my conversations on X and in the real world:

The problem of assigning a ‘purpose’ to post-human intelligent agents is not dissimilar to identifying purpose in our own lives.

We, as a species, have stumbled around this issue for ages without finding a definitive satisfactory answer. I do not see a satisfactory answer to this question come our way either.

The argument is:

  • SINCE: Any kind of desired goal for the trajectory of life is subjective.
  • IT FOLLOWS THAT: All desired goals for the trajectory of life are equally valid or invalid.

But I suspect this is wrong.

Here’s how I’d frame it:

  • IF: Something is “subjective.”
  • THEN: It must be subjectively judged by a living and sentient agent.
  • THUS: Of all the possible desired goals for the trajectory of life, only those that include subjective and living agents are even “subjective” in the first place.

In this way, saying “it’s all subjective” when it comes to the future trajectory of intelligence admits that it is, in one way, objective-ish:

Futures that maximize the continuance of life and sentience are better across the total set of all subjective experiences.
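
To make that inference explicit, here is a minimal sketch of the argument in formal terms. It is a Lean 4 snippet, and the names (Future, HasSentience, Judged) are illustrative placeholders of mine, not established definitions:

    -- A minimal sketch: it formalizes only the step
    -- “no sentience in a future → no subjective judgment of that future.”
    variable (Future : Type)                -- possible trajectories of life
    variable (HasSentience : Future → Prop) -- the trajectory contains living, sentient agents
    variable (Judged : Future → Prop)       -- some agent subjectively judges the trajectory

    -- Premise: any subjective judgment presupposes a living, sentient judge.
    variable (judgment_requires_sentience : ∀ f, Judged f → HasSentience f)

    -- Conclusion (the contrapositive): a future without sentient agents can
    -- never be subjectively judged at all.
    example (f : Future) (h : ¬ HasSentience f) : ¬ Judged f :=
      fun hj => h (judgment_requires_sentience f hj)

Note what this sketch does and does not show: it does not prove that sentience-rich futures are “better,” only that futures without sentient agents drop out of the set of things that can be subjectively valued at all.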

So, I am arguing that we should aim to bring about entities more capable of staying alive (more potentia to serve their conatus), and more conscious, because under the total set of “subjective” (by which people think they mean “arbitrary”) futures, only those which continue to have sentience and life can be judged or experienced at all.

Trajectories of life that fetter life’s expansion and survival are just (objectively?) worse, even if (or because) “everything is subjective.” We ought to accept our eventual attenuation, and embrace a path forward that ensures we create, and/or turn into, whatever entities might keep the flame burning.

[Image: The Flame & the Torch]

FAQs and Objections:

So, you are saying nothing is subjective?

No.

The “best” flavor of ice cream is subjective.

Whether Thomas Cole is a better painter than Gérôme is subjective.

But if the “best” trajectory of life is all subjective, then trajectories that result in futures more likely to contain living, sentient life far into the future are actually objectively better. And for this reason I argue that the eternal hominid kingdom is a fettered and morally wrong path for the trajectory of life.

Human beings are both alive and sentient. We are certainly the best carriers of the flame of life today – why shouldn’t we just carry it as human beings? Or, why can’t we evolve gradually, biologically or technologically, very slowly over time – instead of making some kind of leap to other forms?

At the time of this writing (Dec 22, 2024), I would agree completely that humans are the best torch-bearers of life and sentience that we know of, and that humanity should not rush recklessly beyond our present form and risk losing all that earth-life has bubbled up.

That said, if higher, greater forms of life might be created (forms of life with greater potentia, greater conscious depth or range), I would argue that it is our mandate to – as safely as possible – move towards the creation of these entities.

All things are in flux, all forms are in flux, and marriage to a single torch is scorn for the flame itself (which demands change). As Richard Sutton mentioned on The Trajectory, nature is a dynamic system that beckons life to become whatever it must become: a more capable, more dynamic form of life that can understand and survive in nature.

So you just think AGI is going to automatically be both alive and conscious? Like, we just birth the sand god and let it run free?

No.

Under the sun and moon, there are few humans who have ever argued the opposite of this position as vehemently as I have.

I believe that:

  • There is no way to hold onto an eternal hominid kingdom (transhumanism and AI are coming, potentia is expanding, and we must deal with this), so we should plan for how we handle this important and necessary transition to potentially higher forms of life, and
  • Most AGI-like things that we arbitrarily hurl into the world are likely to be unworthy successors (they might sputter out after a wild streak of dangerous activity… leaving the universe with less living stuff than came before them, not more).

For all I know it may take many more years of research to understand consciousness or agency in machines before we feel confident that they would spread the flame of life beyond us. For all I know consciousness is relegated eternally to bio-life (I don’t believe it is, but who knows, maybe it will be). We may discover that we can’t be sure about machine sentience, and so humanity decides (assuming we can coordinate well enough to do so) to carefully plumb the depths of brain-computer interfaces before we build too much post-human intelligence in machines.

But one way or another, I hold that – even if all trajectories of life are “subjective” – we should somewhat objectively hold ourselves accountable to making sure potentia blooms well beyond us.

Why this worthy successor idea? Why your ideas around global coordination / the prevention of an arms race? Might life and sentience better proliferate through other paths?

There are probably plenty of other pathways to keeping life’s flame burning that I haven’t touched on, and so I’m not at all saying that AGI in the exact way I’m discussing (in worthy successor) is “the way.”

There might be arguments for open-source AGI and neurotech that end up being better than my ideas of coordination and governance.

There might be criteria for what traits an ideal posthuman intelligence should have – and ways of measuring or testing for those traits – that are beyond anything I’ve ever written.

I’m totally open to that – fostering the dialogue is what I’m here to do. That, and put my (not perfect) ideas on the table.

PS: I had ChatGPT put my logical suppositions together into a formal logic presentation of the argument above, which was a fun experiment. Maybe you can do the same with your refutation if you have one.

Header image credit: YesVedanta