Learn from the Brightest Minds in Banking AI
Hi, I’m Dan Faggella
I founded Emerj Artificial Intelligence Research, a market research company focused on the ROI of AI among the Fortune 500. I’ve conducted nearly a thousand interviews with Fortune 500 AI leaders (Raytheon, US Bank, etc.), C-level executives at AI unicorn startups (Dataiku, Domino Data Labs, OakNorth, etc.), and AI researchers (Yoshua Bengio, Nick Bostrom, etc.).
I believe:
- Creating posthuman intelligence will be the most morally consequential event we can think of. We should aim to ensure that this final creation is a worthy successor.
- Moralizing AGI governance and innovation (labeling some players “bad” and others “good”) is disingenuous. All players are selfish. We should focus squarely, and in good faith, on the incentives of the players involved in order to find a way forward for humanity, and for intelligence itself.
Present focus:
- Growing Emerj.
- Putting the realpolitik of AGI and the posthuman transition on blast with The Trajectory.
Stay in touch:
Twitter / LinkedIn / Trajectory newsletter / AI in Business Podcast
Other:
- Literature, especially Plutarch, Emerson, and Montaigne.
- Classical architecture and history.
- Practicing my Greek, though it’s still mediocre.
Latest articles and podcasts
Connor Leahy – Slamming the Brakes on the AGI Arms Race [AGI Governance, Episode 5]
Joining us in our fifth episode of our series AGI Governance on The Trajectory is Connor Leahy, Founder and CEO of Conjecture. This is the most in-depth take I’ve ever…
Andrea Miotti – A Human-First AI Future [AGI Governance, Episode 4]
Joining us in our fourth episode of our series AGI Governance on The Trajectory is Andrea Miotti, Founder and Executive Director of ControlAI. He is also one of the main…
If it’s All Subjective, it’s Objective
In many essays, I tout the moral mandate for humanity to construct a vastly posthuman intelligence (a worthy successor) that might expand its powers (potentia) and maintain life (the flame)…
Theories of AGI “Values”
People who fear AGI destroying humanity often fear that AGI will not share human values. People who advocate for building AGI soon often believe that AGI will naturally share human values….
Stephen Ibaraki – The Beginning of AGI Global Coordination [AGI Governance, Episode 3]
Joining us in our third episode of our series AGI Governance on The Trajectory is Stephen Ibaraki, Founder of the UN ITU AI for Good, and Chairman at REDDS Capital….