This new installment of the Worthy Successor series is an interview with Vincent C. Müller, Alexander von Humboldt Professor for Ethics of AI at the University of Erlangen-Nürnberg.
In this conversation, Vincent discusses how artificial intelligence has changed in recent years, particularly the development of large language models and the increasing role of deep learning. He explains that these systems performed at a level that was not widely expected and that their generative capabilities across language, images, and video have had a significant impact on the field.
Vincent begins from a destabilizing claim: that what we consider intelligent behavior is closely tied to learning and accumulated experience. He explains that much of what appears to be intelligence comes from patterns developed over time and that what we describe as intelligent behavior often depends on prior learning rather than something entirely separate from it.
We talk about how deep learning has overtaken other areas of AI research and how investment in the field has increased significantly. We examine the limitations Vincent sees in current systems, including challenges related to interaction with the physical world and coordination between different capabilities, and turn to the question Vincent raises about the direction of AI development and what goals it should serve.
The interview is our twenty-eighth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This series references the article: A Worthy Successor – The Purpose of AGI.
I hope you enjoy this interesting conversation with Vincent:
Below, we’ll explore the core takeaways from the interview with Vincent, including his list of Worthy Successor criteria and his recommendations for innovators and regulators who hope to bring one about.
Vincent explicitly identifies successful interaction with the physical world as a major gap in current AI systems. He emphasizes that intelligence is not just about processing inputs passively, but about coordinating perception and action in real time. This includes the ability to recover from unexpected changes and continue functioning effectively in dynamic environments.
This matters for a “worthy successor” in Vincent’s framing because he directly ties this capability to replacing human roles in the real world. Without this kind of integrated perception and action, systems remain limited in scope and cannot take on the broader range of activities that humans currently perform.
Vincent distinguishes between systems that are given goals and systems that can reflect on goals. He explains that humans do not simply execute fixed objectives, but continuously evaluate and adjust their priorities depending on context. This includes deciding which goals matter more in a given situation and how to balance competing considerations. This illustrates that intelligent behavior involves ongoing reassessment of goals rather than fixed execution.
Vincent repeatedly emphasizes that the central question for AI is not only what systems can do, but what they are being built for. He states that discussions about AI are too focused on risks and problems, and not enough on defining the desired outcomes or direction for the technology.
He connects this to concrete goals such as improving living conditions, increasing fairness, and supporting human well-being. These are presented not as abstract ideals, but as practical benchmarks for evaluating whether AI development is moving in a desirable direction.
Vincent frames this question of purpose as central to how AI should be developed and evaluated. He presents the shift in focus, from cataloguing risks to defining what the technology is actually meant to achieve, as necessary for understanding whether ongoing work in AI is aligned with any clearly defined purpose.
Vincent refers to broader global goals, noting that “UNESCO has sort of humanity goals,” and points to issues such as access to food and water when discussing what AI development should aim to achieve. He raises the question of whether current technological progress is moving in that direction, and notes that this kind of question about direction is not being asked often enough in current discussions of AI development.
One of the biggest takeaways for me in this conversation was the shared emphasis on direction. Vincent kept returning to the question of what AI is actually for, and I found myself reinforcing that framing directly. I’ve been asking the same question, “to what end?”, for years, and it was clear throughout the conversation that we’re aligned on the idea that capability alone is not enough without a defined purpose.
At the same time, I found myself pushing Vincent on timelines and the inevitability of progress. My view is that the scale of investment and ambition in AI makes continued breakthroughs extremely likely, whereas Vincent remained more cautious, emphasizing uncertainty and the limits of prediction. That difference in posture stood out less as a disagreement about what matters, and more as a difference in how strongly to commit to expectations about what comes next.
I also appreciated Vincent’s clarity around what current systems are still missing. His emphasis on real-world interaction and goal reflection maps closely to how I think about the gaps between today’s systems and anything resembling general intelligence.
Where I found a more subtle divergence was around how to frame the long-term future. Vincent stays closely tied to improving present human conditions (things like access to food and water), while I tend to think in terms of longer arcs of life and persistence. I don’t see those as mutually exclusive, but it does highlight a difference in emphasis between grounding progress in current human needs and thinking about what might extend beyond them.
Overall, what stood out to me most was the sense that we are advancing quickly without asking the most important question often enough. Vincent framed it directly, and I agree: if we don’t define what we’re trying to achieve, then progress becomes directionless. That, more than any specific technical limitation, feels like the core issue raised in this conversation.