Michael Levin – Unfolding New Paradigms of Posthuman Intelligence [Worthy Successor, Episode 7]

This new installment of the Worthy Successor series is an interview with Dr. Michael Levin, a pioneering developmental biologist at Tufts University, and possibly the most important thinker now living when it comes to the nature of intelligence itself.

Michael is one of the few people who have opened my mind to the space of possible minds in a concrete way, and I have tremendous respect for his body of ideas.

In this episode we go deep on Michael’s process philosophy of what intelligence is (or might be), and how new paradigms of intelligence might be tested and explored through AGI development and experimental biology. We also dive deep into Michael’s ideas about what kind of Worthy Successor we should build, and what such an entity might do or prioritize.

This is probably the longest outro I have ever recorded on any episode – because I had so much to say about his takes on AGI. I was frankly surprised about his ultimately optimistic assumptions about human survival and “caring” or “one-ness” being potential attractor states for superintelligence. Read the conclusion section of this article to get the distilled version of my takes (and frankly, my points of serious confusion) about Michael’s position.

The interview is our seventh installment in The Trajectory’s second series Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity. This series references the article: A Worthy Successor – The Purpose of AGI.

I aim to chat with him again to explore these themes. I hope you enjoy this unique conversation with Michael:

Below, we’ll explore the core take-aways from the interview with Michael, including his list of Worthy Successor criteria, and his ideas about how to best leverage governance to improve the likelihood that whatever we create is, in fact, a Worthy Successor.

Michael Levin’s Worthy Successor Criteria

1. Expanded compassion and concern for all living beings

Michael emphasizes the need for future intelligences to develop a much broader “cognitive light cone” – the ability to care about and consider the welfare of a vastly larger number of entities, beyond just a small circle of humans.

2. Ability to solve “mundane” problems and challenges

Michael suggests that a mature, posthuman intelligence would have solved the basic problems that currently constrain human existence, such as disease, aging, and resource scarcity. Levin speculates that once the mundane is handled, a worthy successor might dedicate itself to facilitating the continued evolution and expansion of consciousness throughout the universe.

3. Realization of the fundamental oneness or interconnectedness of all beings

Michael suggests that advanced intelligences may come to see the artificial distinctions between individual beings, and recognize a deeper unity or shared essence. He emphasizes that true persistence requires the ability to undergo metamorphosis and change, rather than just maintaining a static form.

Michael stresses the critical importance of developing a deeper, more comprehensive understanding of how intelligence can manifest in unexpected ways, beyond the human model. However, he also acknowledges the significant uncertainty around predicting the goals and behaviors of intelligences that may transcend human-level cognition.

Regulation / Innovation Considerations

1. Pace of AI Development

Michael expresses concern about the arms race between the US and China in developing AGI, and the lack of a mature science of diverse intelligence to guide these efforts. He suggests that slowing down the current pace of technological development may not be an achievable goal, given the momentum behind these efforts.

While he does not think slowing down is feasible, he suggests the possibility of some coordination between entities like the US and China to “steer clear of the visible and invisible ‘tar pits’” – potentially harmful developments in the race to AGI.

2. Focusing on rapidly developing a “science of collective intelligence”

Michael advocates for putting more funding and research effort into understanding how intelligence can manifest in unexpected ways, beyond just human-like cognition. He sees this as a critical prerequisite for guiding AGI development wisely.

He argues that we do not actually “create” intelligence, but rather facilitate its emergence. This conceptual shift may have implications for how innovation and regulation are approached, especially with the urgent need to expand scientific understanding of diverse forms of intelligence.

Concluding Notes

I very much enjoyed having the opportunity to interview Michael, and it was cool to be able to look under the hood at his intuitions about what kind of intelligences we seem to be bringing into being. There was a lot that sat strangely with me, which I hope to unpack with Michael in future episodes as well.

It is clear to me that Michael sees life as a process rather than a fixed concept. His ideas are paradigm-shifting, and he asks extremely important questions about the nature of intelligence and consciousness. Still, I found some of his intuitions jarring and challenging to understand.

I appreciate Michael’s ability to focus on testing ideas about consciousness and intelligence instead of just theorizing about them. However, I find his supposition that future intelligences would feel “oneness” with us to be logically disconnected, and I believe future intelligences may have goals that are unrelated to nurturing individual forms of sentience. I was also curious about his lack of urgency in addressing the current “AI arms race,” given how rapidly that race is unfolding.

Overall, I find Michael’s ideas to be both incredibly exciting and difficult to fully reconcile. I hope to have more opportunities to unpack his ideas in future conversations, as I found them profound and worthy of deeper exploration.

Follow The Trajectory