Preventing Suffering After Mind Uploading – A Commentary on Yampolskiy and Ziesche

If mind uploading becomes possible, how can we prevent uploaded minds from suffering?

While uploaded minds might be able to experience a hyper-intense range of super-bliss, the opposite could also happen under the wrong conditions (or malicious motives) – and hell, not heaven, could be constructed for post-human minds.

I’ve explored many of these themes on this blog in the past, but I recently received another reason to dig deeper into the opportunities and methods of preventing suffering – when Soenke Ziesche reached out to me about his latest academic article with Dr. Roman Yampolskiy*, which cited a couple of my own articles (including Lotus Eaters and World Eaters), along with those of other authors I respect.

Their article in the Journal of Evolution and Technology is called Do No Harm Policy for Minds in Other Substrates, and you can read it here. For readers interested in this topic, I recommend reading the full paper – it is only ~3,900 words long.

In this article, I’ll explore a number of quotes and statements from this No Harm paper – diving into specific topics related to the ethics of mind uploading, and sharing my own brief thoughts and ideas.

Could it be possible to create such computational substrates without NPCs? In theory, computational substrates for the enhancement and transfer of human minds devoid of any NPCs are possible, but it then becomes very questionable whether our desirability assumption is fulfilled. Yampolskiy (2019) has proposed Individual Simulated Universes (ISUs) in order for human minds to be happy, and, perhaps with a very few exceptions, it is hard to imagine human minds being enduringly happy without any social interaction with other minds.

I don’t believe that mind-uploaded humans will necessarily care about “relationships” as humans today do. Just as superintelligent AI may be conscious, but not like us – drastically post-human minds cannot be expected to have the same core drives and sorrows as present-day Homo sapiens. I suspect many humans will live happily in their own personal universes (a topic Yampolskiy has covered in a previous paper, which I have also commented on in a past article).

Even if other agents do exist, they probably don’t need to be sentient to interact with real, sentient mind-uploaded humans – they could just be rich simulations.

If these additional agents must be conscious, we might suspect that they could be programmed to be exceedingly blissful – though this article contains a number of reasons why that may not be the case.

Therefore, we face a challenge: given the desirability, feasibility and inevitability of ISUs, how can the suffering of other sentient beings be avoided, or at least reduced, in computational substrates for the enhancement and transfer of human minds?

I’m not completely sure whether human happiness, reduced human suffering, or maximized utility (regular old utilitarianism) is the objective here, but the consequences of the end goal – the definition of “the good” here – are important to consider. Indeed, many moral aims do not ultimately imply tending to the cares of humanity. A few examples below:

  1. Maximize wellbeing, minimize suffering — In this case, get rid of the human consciousnesses and just make utilitronium (an argument I posed as a thought experiment in my 2017 TEDx: Can AI Make the World a Better Place).
  2. Maximize knowledge of nature and the world — Get rid of little human consciousness-bubbles, and pour efforts into a singleton that can unlock all the secrets of nature (exploring “the Good” itself – something AGI might help us do).
  3. Sustain human life for its own sake and no other reason — This is outright selfish, but for that very reason we should expect most humans to opt for this route. Will a god-like AI really value this? Does nature herself give any species a “free pass”, or do the strong survive and determine the future?

In the long term, the best humanity can likely hope for is to be digitized and digested in a mind-upload scenario. There is the potential for a “great merger” with some kind of superintelligent AI, but it seems unlikely that mortal human minds will have much to contribute to such an entity, and digesting our computational substrate seems most likely.

The authors don’t talk about how long a happy mind-upload scenario might last, or how it might end, but I’d be interested in their thoughts. I also believe that picking an “end game” – a moral aim to adhere to in launching the trajectory of intelligence (either through AGI or transhumanism or mind uploads, etc) – is critical, and deserves discourse. Back to the paper…

Subroutines — Given the lack of evidence, it is challenging to develop a typology of subroutines that relates to suffering in computational substrates for the enhancement and transfer of human minds. Here we can distinguish whether the subroutines are executed within the mind of the transferred human or in other parts of the computational substrate. The latter require further specification as those subroutines that do not constitute NPCs (since NPCs have already been discussed)…

…In a recent paper, Bostrom and his collaborators formulated the desideratum “that maltreatment of sentient digital minds is avoided or minimized” (Bostrom, Dafoe, and Flynn 2018, 18), and elsewhere Bostrom has encouraged addressing this issue early “while the artificial agents we are able to create are still primitive” (Bostrom 2018, 2). As a follow-up, we recently termed this field of research “AI Welfare Science” (Ziesche and Yampolskiy 2019). 

It’s worth drawing an overt line here: much of this is speculation. But I believe it is tremendously important that humanity be aware that if simulations become self-aware, the moral consequences are immeasurable. Most humans will have a very hard time empathizing with something so abstract (unless it is anthropomorphized in some way), and vigilance in trying not to do harm as we explore the world of AGI and simulations is something that I hope to see a continued emphasis on.

Until we can explore and find a higher good, mastering utilitarian calculus seems to be what we should try to optimize for. How pitifully horrible we are at it.

I’ll state outright that an in-depth exploration of “subroutines” is outside my technical sphere of knowledge.

Since exploring qualia is a difficult problem (Chalmers 1995), we emphasize quantitative and objective physiological/computational indicators.

I can agree on that much, but indeed all else is speculation. It is a great shame, maybe the great shame, that more isn’t known – I hope we can learn more about the origin and workings of consciousness in the decade to come.

All in all, MUCH of this will be achievable with non-sentient NPCs. If there must be an ecosystem of varied, delicate NPCs who can suffer in various ways, we might as well just go with utilitronium and not deal with maintaining messy, suffering-inducing human ecosystems. Baking “empathy” into uploaded human minds for interacting with NPCs is inefficient and silly – let’s just not deal with this at all.

Seems like the whole point is this:

  • Have humans live in their own bubbles where they can’t hurt anyone else. I agree with this fully (read: Epitome of Freedom).
  • If the humans must interact with “agents” at all (and I have argued earlier in this article that they don’t), then make those agents not suffer at all.
  • If other sentient “stuff” is to be created, make it blissful.

Overall: just make sure that we can measure and optimize utilitarian calculus across all matter, and not permit bubbles of suffering in the sentient ecosystem… but also selfishly maintain our “selves”, because we’re selfish and amoral little creatures and we want to survive even if it is at odds with other principles (utility monster).
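As a toy illustration only (every name and number below is hypothetical, and real suffering-detection remains an open problem, as the authors note), the “measure utilitarian calculus and flag bubbles of suffering” idea could be sketched like this:

```python
# Toy sketch: aggregate wellbeing across hypothetical simulated minds
# and flag "bubbles of suffering". Values are illustrative, not a
# claim about how suffering could actually be measured.

def total_utility(minds):
    """Sum wellbeing scores (positive = bliss, negative = suffering)."""
    return sum(m["wellbeing"] for m in minds)

def suffering_bubbles(minds, threshold=0.0):
    """Return names of minds whose wellbeing falls below the threshold."""
    return [m["name"] for m in minds if m["wellbeing"] < threshold]

minds = [
    {"name": "uploaded_human_1", "wellbeing": 8.5},
    {"name": "npc_background_agent", "wellbeing": -2.0},  # a bubble of suffering
    {"name": "uploaded_human_2", "wellbeing": 6.0},
]

print(total_utility(minds))      # → 12.5
print(suffering_bubbles(minds))  # → ['npc_background_agent']
```

Trivial as arithmetic, of course – the entire difficulty lies in obtaining anything like those wellbeing numbers in the first place.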

On the aggregate, I agree with these points. The devil will be in the details in terms of “detecting” suffering, as the authors rightly point out.

In an earlier paper (Ziesche and Yampolskiy 2019), we proposed that the overarching goal should be suffering-abolitionism as elaborated by Pearce (2007), yet transferred to digital environments and ISUs in particular, which Pearce did not incorporate. Since suffering-abolitionism has not yet succeeded, and since the prevention of suffering has a moral urgency, we have proposed the policy sketched above.

It is unclear whether this should be the highest goal. It might instead be:

  • Maximizing overall gradients of bliss (overall utilitarian calculus, not just suffering reduction).
  • The discovery of conceptions of “the good” that are better than, and beyond, utilitarianism and the pain-pleasure axis.

While I respect David Pearce‘s work (as evidenced by my interview with him, and dozens of references to his work on this blog), I’m not convinced that suffering-abolitionism is indeed the highest moral goal for humanity, or for post-human intelligences – who will almost certainly have values different from, and maybe much better than, our own. I’m not against suffering abolition – but I am not convinced of its place as the North Star that should guide the trajectory of intelligence and sentience.


* In the past I’ve interviewed both of the authors about AI-related topics unrelated to mind uploading (Roman, Soenke).