If It’s All Subjective, It’s Objective
In many essays, I tout the moral mandate for humanity to construct a vastly posthuman intelligence (a worthy successor) that might expand its powers (potentia) and maintain life (the flame)…
People who fear AGI destroying humanity often fear that AGI will not share human values. People who advocate for building AGI soon often believe that AGI will naturally share human values…
Joining us in the third episode of our AGI Governance series on The Trajectory is Stephen Ibaraki, Founder of the UN ITU AI for Good, and Chairman at REDDS Capital…
In 2025, it is no longer intellectually honest to completely shun the idea of artificial general intelligence (AGI) or AGI risk. Yet still, in Dec 2024 (the time of this…
Joining us in the second episode of our AGI Governance series on The Trajectory is Mike Brown, Partner at Shield Capital and former Director of the Defense Innovation Unit…
Honest AGI thinkers are frank about the fact that we can’t possibly predict all of the actions or ideas of a posthuman intelligence vastly beyond ourselves. While it seems…
Sebastien Krier of Google DeepMind joins us for the first episode of our brand-new AGI Governance series on The Trajectory. Beginning his career studying law at King’s College, Sebastien…
AI alignment typically implies anthropocentric goals: “Ensuring that AGI, no matter how powerful, will serve the interests and intentions of humans, remaining always under our control.” – or – “Ensuring…
When it comes to cognitive architecture, philosophy, and AGI, few thinkers are as well-versed as Joscha Bach. Previously Principal AI Engineer of Cognitive Computing at Intel, today he serves as…
You’ve probably heard of X-Risk. Well, how about i-Risk? Taking i-Risk seriously implies planning for a future where… Taking i-Risk seriously implies understanding that i-Risk is an X-Risk. In this article I’ll…