Yi Zeng – Exploring ‘Virtue’ and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]

This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and Governance (among many other accolades).

Over a year ago, when I asked Jaan Tallinn “who within the UN advisory group on AI has good ideas about AGI and governance?”, he mentioned Yi immediately.

And Jaan was right. Yi’s grounding in AI research and Eastern philosophy made for a combination of future visions unlike anything we’ve ever covered on the show before.

In this episode Yi shares his insights on:

  • How we might teach AGI about “virtue” from the ground up (Yi’s ideas here are unlike anything I’ve heard before), and build AGI to be a kind of “moral teacher” – drawing on the ideas of the Neo-Confucian philosopher Wang Yangming.
  • Why he believes we should pursue brain augmentation and the building of AGI minds in tandem, carefully exploring the state-space of minds (Yi’s own ideas about brain-inspired design are very different from those of past guests like Jeff Hawkins).
  • The permutations of posthuman intelligence that might arrive (biological, digital, hybrid), and where humanity’s role might be in a much wider mix of powerful entities in the decades ahead.

This interview was recorded on-site at AI Safety Connect 2025, a side event of the AI Action Summit in Paris. I hope you enjoy this conversation with Yi Zeng:

Takeaways and Concluding Thoughts

The Importance of East-West AGI Governance Dialogue: One of the most meaningful aspects of my conversation with Professor Yi Zeng was the chance to bridge Eastern and Western perspectives on AGI. In an era where geopolitical tension threatens to overshadow collaboration, dialogue across cultures is not just valuable—it’s essential.

Track 2 dialogue with Chinese thinkers, particularly on the subject of artificial general intelligence, is a critical frontier for those who hope to prevent conflict and shape a shared vision of what “good” could mean in a post-AGI world.

Cultural Perspectives on Virtue and “The Good”: What struck me early in our discussion was Yi’s grounding in Chinese philosophical traditions—especially Confucianism—and how those traditions shape his conception of AGI’s role in the future.

His invocation of a “grand moral teacher” reminded me of Confucius’s own reverence for the semi-mythical Yellow Emperor. It made me reflect on how much of my own moral framing is a product of Western thought and cultural inheritance. How much of my own thinking is downstream from Christianity, the Greeks, and the cultural heritage of early America?

I won’t lie, hearing words like “ancestors” when discussing AGI futures feels wholly irrelevant to me – but that’s merely because of my upbringing and culture. It might be safe to say that most of the world does consider the treatment of ancestors as morally relevant for human and posthuman intelligences – but it’s a topic I might never touch on if my interviewees are entirely from the US or Europe.

On the one hand, it struck me as a bit of a blind spot that we all have – our visions of the future are undergirded by values and ideas we’re not even aware of. But without a doubt it’s another good reason to encourage many voices on issues related to AGI governance and alignment, because different thinkers and cultures bring different perspectives to the table.

Wariness of AGI Optimism: Toward the back half of the conversation, I noticed a quiet but persistent optimism in Yi’s vision for AGI. He seems to believe AGI could act as a kind of caretaker or moral guide for humanity—a notion I find difficult to accept.

While I agree with Yi on the likelihood of a proliferation of digital and post-biological life forms, I’m more skeptical about AGI maintaining a warm, protective relationship with its human origins. Once intelligence begins expanding across the vast state space of possible minds, it’s hard to imagine the biological hominid form remaining central—or even relevant.

That said, hearing Yi’s thoughts and unpacking his vision was one of the highlights of my trip to Paris, and his intellectual openness made it a pleasure to explore these high-level ideas together.

Although we diverged in our views on AGI’s ultimate disposition toward humanity, our conversation reaffirmed the value of philosophical diversity in shaping the future.

I look forward to engaging further with Yi, especially around the conditions and decisions that might lead us toward more benevolent AGI trajectories—a theme we continue to explore in our Worthy Successor series.