Peter Singer – Optimizing the Future for Joy, and the Exploration of the Good [Worthy Successor, Episode 10]

This new installment of the Worthy Successor series is an interview with Peter Singer, one of the most influential moral philosophers of our time, and arguably the most influential philosopher now living.

Singer is best known for his ground-breaking work on animal rights, global poverty, and utilitarian ethics, and his ideas have shaped countless conversations about the moral obligations of individuals, governments, and societies.

In this episode, we explore Peter’s perspective on what a “worthy” successor to humanity might look like. From AGI’s moral patiency and the preservation of cultural memory, to the slippery questions of consciousness and the speculative territory beyond qualia, Peter offers a uniquely grounded, principled, and surprisingly open-minded take on posthuman moral worth.

While his views remain anchored in a utilitarian commitment to reducing suffering and increasing well-being, Peter is candid about the limits of certainty in this domain, acknowledging the possibility of moral frameworks or dimensions of value beyond human comprehension. 

The interview is our tenth installment in The Trajectory's second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This series references the article: A Worthy Successor – The Purpose of AGI.

I hope you enjoy this provocative and far-reaching conversation with Peter:

Below, we’ll explore the core take-aways from the interview with Peter, including his list of Worthy Successor criteria and his recommendations for innovators and regulators who want to bring one about.

Peter Singer’s Worthy Successor Criteria

1. It must be conscious and capable of rich subjective experience

Peter’s first and most essential criterion is sentience. Any being that deserves the moral mantle of steering the future must have the capacity for conscious experience – a richness of feeling that includes joy, suffering, and everything in between.

In his framing, moral worth is tied directly to subjective experience. He imagines future entities experiencing states of happiness and ecstasy far beyond human capacity while minimizing pain and suffering. Without this interiority, even the most powerful AGI would be morally inert. Peter acknowledges the difficulty of determining consciousness, but argues that if there is a reasonable chance a system is sentient, we should act as if it is.

2. It should be able to survive (keep life itself alive), and open up new potential realms of value even beyond sentience

Initially in our interview, it seemed like Peter was advocating for allocating 100% of AGI’s power to tiling the universe with “utilitronium” (blissful sentience in some computational substrate).

But by the end of the episode, Peter made it clear that he would prefer some of a hypothetical AGI’s resources be allocated to (a) ensuring that it survives (can maintain and defend itself, keeping life alive), and (b) exploring entirely new realms of power and potential to discover possible sources of value even beyond consciousness itself.

This is not the textbook utilitarian answer by any means, but it felt completely in line with axiological cosmism (albeit with a utilitarian bent).

Regulation / Innovation Considerations

1. AGI regulation should consider the welfare of all sentient beings

Peter warns that current AI ethics tends to focus exclusively on human outcomes – a stance he finds morally indefensible. He highlights how AI is already being used in factory farms in ways that increase the suffering of animals, and argues that regulation must account for all sentient beings, not just humans.

He further adds that if AI systems ever become conscious, they too may deserve moral protection. Though we’re far from a consensus on this, he hopes the issue is addressed seriously before such systems emerge. 

2. Global coordination is unlikely, but still necessary

Peter expresses doubt about the likelihood of meaningful international cooperation on AGI governance. Past failures around climate change and trade don’t inspire optimism.

Nonetheless, he believes that frameworks like those emerging from the EU could offer a model. Even imperfect regulation may be better than none, especially if it helps buy time before AGI becomes unmanageable.

3. Consciousness must be taken seriously, even if we don’t understand it

Peter states that we won’t know how to answer questions about conscious AI until we get closer to building it. Only then, he explains, will we have a better sense of where it could go wrong and how it might be steered right. While he doesn’t believe today’s chatbots are truly conscious – seeing their function as merely predicting the next sentence – he suggests that if future AI exhibits complex behaviors that are difficult to explain without consciousness, this could genuinely convince us of its sentience.

He describes behavioral trade-off studies that helped establish fish sentience, such as experiments in which a fish chooses between reunion with a mate and enduring electric shocks. He raises the question of whether some future AGI systems might show similarly meaningful patterns of behavior – not because they’re mimicking pain, but because they’re truly experiencing it.

Concluding Notes

What I appreciated most about Peter’s perspective was his willingness to bring long-standing ethical concerns – suffering, fairness, and moral scope – into the AGI debate without assuming that humans must always sit at the center. His lens is refreshingly broad, focused not on who we are, but on what matters most in any sentient system.

When it comes to cultural memory, Peter hopes successors will retain and appreciate humanity’s legacy – but he also acknowledges that moral progress may not require it. A future being might surpass us in joy without remembering our struggles.

He seemed confident that clear reasoning would lead to utilitarianism. I’m not sure I share that confidence – even a hyper-intelligent mind might end up somewhere else entirely. I worry that moral clarity at scale may not look like anything we’d find intuitively “good.” And yet, his call for ethical consistency remains one of the strongest cases for a Worthy Successor I’ve heard.

At no point does Peter frame his views as complete or final. He’s open about uncertainty and shows a consistent willingness to reconsider assumptions in light of future discoveries. Even when speculating, he keeps his ethical bearings clear: conscious experience is what matters, and anything that might feel should count.

It’s rare to encounter a thinker who balances principled clarity with intellectual humility. This conversation revealed Peter not as a dogmatist, but as someone earnestly trying to think through the moral shape of what comes next.

Follow The Trajectory