This episode of the Worthy Successor series features Weaver Weinbaum, an independent researcher and the founder of NuNet. His work sits at the intersection of philosophy and engineering, focusing on intelligence, consciousness, and decentralized compute systems intended to democratize access to AI infrastructure.
Weaver was first recommended to the show many years ago by AGI researcher Ben Goertzel. His work on open-ended intelligence has influenced discussions about how intelligence should be structured beyond simple optimization systems.
In this episode, we explore Weaver’s idea that intelligence should not merely optimize for predefined goals. Instead, intelligence should expand the space of possible goals and values. This perspective sits at the center of his concept of open-ended intelligence.
We also discuss how future intelligences might think about care for humans, other forms of life, and potentially other forms of sentience that humanity does not yet understand.
The interview is our 24th installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.
This series references the article: A Worthy Successor – The Purpose of AGI.
I hope you enjoy this unique conversation with Weaver.
Below, we’ll explore the core takeaways from the interview with Weaver, including his list of Worthy Successor criteria and his recommendations for the innovators and regulators who hope to bring such a successor about.
Weaver’s first criterion is the expansion of conscious attention, which he describes as a foundational trait of future intelligence. Attention is not free – it requires energy and computation – which means living systems historically evolved to focus attention narrowly on what was necessary for survival.
Expanding the sphere of attention means acknowledging the existence of other beings and systems that may fall outside immediate self-interest. For Weaver, the simple act of recognizing the existence of the “other” changes how intelligence approaches decision-making and interaction.
This expanded awareness does not remove trade-offs. Living systems still maintain themselves by exporting entropy to their environment, meaning that sustaining one system can impose costs on others.
Weaver’s second criterion is the expansion of care – the cultivation of responsibility toward the well-being of sentient beings.
He describes care as a proactive guardianship that applies both to individuals and collectives. Care requires intentional investment of energy, resources, and computation, and reflects a stance toward the world rather than a fixed rule.
However, Weaver emphasizes that care does not eliminate conflicts between beings. Situations will still arise where caring for one system conflicts with caring for another.
Weaver’s third criterion is the expansion of freedom. He frames freedom as the removal of constraints that limit the development and expression of intelligence.
These impediments include ignorance, misunderstanding, and intolerance – factors that restrict the ability of beings to understand their environment and interact constructively with others. In Weaver’s framework, freedom is closely tied to the expansion of intelligence itself.
At the same time, Weaver emphasizes that expanding freedom and expanding care can exist in tension. Increasing complexity, computation, and technological capability can increase the entropy exported to surrounding systems, placing pressure on the broader environment.
For Weaver, navigating the tension between freedom and care is not a problem to eliminate but the central ethical space within which intelligence operates.
Weaver emphasizes the importance of democratizing access to AI infrastructure and computational resources. In his view, intelligent systems and the computational resources required to run them should not remain concentrated within a small number of organizations.
Expanding access to compute allows more individuals and communities to experiment with intelligence technologies and explore new ideas. He notes that broader access does not imply identical outcomes or usage across individuals. Some people will develop more advanced applications than others, but wider access expands the overall space of experimentation and discovery.
In Weaver’s view, expanding access to intelligent resources helps cultivate both freedom and care within technological development, allowing a wider range of actors to participate in shaping the trajectory of intelligence.
Weaver argues that scientific research is increasingly shaped by economic and political incentives, which can narrow the scope of exploration. In his view, these pressures influence which ideas are pursued and which research directions receive support.
Because funding systems often prioritize commercially viable outcomes, researchers may struggle to explore ideas whose value is not immediately obvious. Weaver suggests that scientific knowledge and technological development should be more openly shared in order to expand the range of possible discoveries.
In his view, broader access to knowledge and research tools can expand the intellectual search space and enable more experimentation.
Weaver argues that education systems should move beyond a narrow focus on transferring knowledge and preparing individuals for predefined economic roles.
He suggests that education should cultivate curiosity and encourage exploration rather than directing students only toward specific profitable skills. For Weaver, encouraging curiosity and experimentation helps individuals participate more actively in expanding knowledge and intelligence.
Education that cultivates curiosity and adaptability may better prepare future generations to navigate accelerating technological and social change.
What struck me most about this conversation with Weaver is the emphasis on expanding the scope of intelligence itself. In many discussions about advanced AI, the focus is often on optimizing for particular goals or solving particular problems. Weaver instead points toward a broader framing: intelligence as an expansion of attention and capability. The idea that a worthy successor would continually widen the sphere of what it can perceive, influence, and understand feels like an important lens through which to think about the future of intelligence.
The tension he describes between freedom and care is also particularly compelling. On one side is the expansion of powers – new capabilities, new ways of interacting with the natural world, new forms of intelligence emerging over time. On the other side is the coordination required when those expanding powers exist within an ecosystem of other forms of life. Weaver does not treat this tension as something that disappears with greater intelligence. Instead, he frames it as a permanent feature of the ethical terrain that advanced intelligence must navigate.
That framing stands in contrast to a view I encounter often – the idea that a sufficiently intelligent system will simply converge toward universal harmony, effortlessly balancing the needs of every organism that exists. Weaver’s perspective is more grounded than that. The expansion of power and the coordination of life are likely to remain in dynamic tension, just as they have throughout the history of biological evolution.
Another point that stood out is his suggestion that interacting with AI may help humanity understand itself more deeply. If developing artificial intelligence becomes a way of studying intelligence itself, then the project of AI development is not only technological but philosophical as well. Understanding our own cognition, motivations, and limitations may prove essential if we hope to shape the trajectory of intelligence responsibly.
It is possible that the next phase of this trajectory is not a sudden leap into radically alien forms of intelligence, but a more capable and coordinated civilization first. Greater access to knowledge, more adaptive governance, and deeper understanding of intelligence itself may be necessary stepping stones before larger transitions unfold.
I found this conversation with Weaver both thoughtful and clarifying. His work sits at an interesting intersection of philosophy and engineering, and I’m grateful that Ben Goertzel originally pointed me in his direction many years ago. I hope you found the discussion as thought-provoking as I did. There are many more Worthy Successor conversations ahead, and I’m grateful you joined us for this one.