Joining us for the third episode of our AGI Governance series on The Trajectory is Stephen Ibaraki, Founder of the UN ITU AI for Good, and Chairman at REDDS Capital.
Drawing on his many years of involvement with ACM, IEEE, and other technical associations, Stephen makes a compelling case for (a) how technical standards and ethics work has already set a good foundation for AI development, and (b) how technical standards (combined with other kinds of governance) might help to coordinate Great Powers to bring about AGI safely.
While “AGI governance” is too often strawmanned as necessarily a kind of one-world totalitarian government, Stephen clearly lays out alternative paths (with existing and historical precedents) for helping to align nations and work towards better shared outcomes – there’s a lot of nuance in this episode that most of the AGI governance dialogue misses.
Stephen is one of the very rare people in the intergovernmental / policy space who has a strong grasp of brain-computer interfaces and other pathways to posthuman intelligence, and he unpacks his thoughts on how AGI and BCI might develop and be managed in parallel.
I hope you enjoy this conversation with Stephen:
Below, I’ll summarize Stephen’s main points from each of the four sections of our interview.
9 out of 10.
Stephen Ibaraki estimates we are only 6-7 years away from achieving AGI. That means risks such as AGI developing goals and agency beyond human control are drawing nearer, with potentially catastrophic outcomes, up to and including the extinction of humanity.
He highlighted the need for global coordination and governance to prevent an “arms race” dynamic between nations or companies developing AGI.
Stephen believes that we should establish common principles, standards, and frameworks for the safe and responsible development of AGI. This could draw on the work of organizations like the ACM, IEEE, and UNESCO.
Developing operational tools, benchmarks, and processes to monitor the progress of AGI development can help detect potential problematic trajectories early on. Stephen believes we need to ensure transparency and collaboration between different stakeholders – including governments, corporations, scientific/technical organizations, and the public – to align incentives and prevent an uncontrolled “arms race” dynamic.
Fostering international cooperation and coordination to establish a global framework should be a priority, rather than having fragmented or conflicting national approaches.
Stephen proposed a multi-stakeholder collaboration: The governance approach would involve bringing together diverse stakeholders – including governments, corporations, academia, and civil society groups. This multi-stakeholder model is seen as crucial to align incentives and ensure a balanced approach.
The overall aim would be to create a global, collaborative framework to steer the development of AGI in a positive direction, drawing on the expertise and participation of diverse stakeholders. Pragmatic, operational tools would be a key part of this approach.
The key is for both innovators and regulators to proactively collaborate and build upon existing technical work to shape the governance of transformative AI capabilities in a responsible manner. This will require an adaptive, multi-stakeholder approach.
For Innovators:
For Regulators:
…