Hi, I’m
Dan Faggella
I founded Emerj Artificial Intelligence Research, a market research company focused on the ROI of AI among the Fortune 500. I’ve conducted nearly a thousand interviews with Fortune 500 AI leaders (Raytheon, US Bank, etc.), AI unicorn startup C-level folks (Dataiku, Domino Data Labs, OakNorth, etc.), and AI researchers (Yoshua Bengio, Nick Bostrom, etc.).
I believe:
- Creating posthuman intelligence will be the most morally consequential event we can think of. We should aim to ensure that this final creation is a worthy successor.
- Moralizing AGI governance and innovation (calling some “bad” and others “good”) is disingenuous. All players are selfish. We should focus squarely and with good faith on the incentives of the players involved in order to find a way forward for humanity, and intelligence itself.
Present focus:
- Growing Emerj.
- Putting the realpolitik of AGI and the posthuman transition on blast with The Trajectory.
Stay in touch:
Twitter / LinkedIn / Trajectory newsletter / AI in Business Podcast
Other:
- Literature, esp. Plutarch, Emerson, Montaigne
- Classical architecture and history.
- Practicing my Greek, though it’s still mediocre.
Latest articles and podcasts
- Cosmic Moral Aspirations
The guiding ideals of most humans are naturally oriented towards the benefits of, well, human beings. “We should create technologies that improve the lives of current and future human beings.”–“We…
- Moral and Psychological Development “Trend Towards the Cosmic”
Over the course of writing my Cosmic Moral Aspirations (CMA) essay, I came across a number of moral and psychological development models that exhibited the kind of “trend towards the…
- What’s Your p(Bloom)?
Enough “What’s your p(Doom)?” How about asking: “What’s your p(Bloom)?” Asking someone for their “p(bloom)” means asking them: “What is the likelihood that AGI will cause all sentient and autopoietic…
- Stuart Russell – Avoiding the Cliff of Uncontrollable AI (AGI Governance, Episode 9)
Joining us for the ninth episode of our AGI Governance series on The Trajectory is Stuart Russell, Professor of Computer Science at UC Berkeley and author of Human Compatible. He…
- Craig Mundie – Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8)
Joining us for the eighth episode of our AGI Governance series on The Trajectory is Craig Mundie, former Chief Research and Strategy Officer at Microsoft and longtime advisor on the…