Learn from the Brightest Minds in Banking AI
Hi, I’m Dan Faggella
I founded Emerj Artificial Intelligence Research, a market research company focused on the ROI of AI among the Fortune 500. I’ve conducted nearly a thousand interviews with Fortune 500 AI leaders (Raytheon, US Bank, etc.), C-level executives at AI unicorn startups (Dataiku, Domino Data Labs, OakNorth, etc.), and AI researchers (Yoshua Bengio, Nick Bostrom, etc.).
I believe:
- Creating posthuman intelligence will be the most morally consequential event we can think of. We should aim to ensure that this final creation is a worthy successor.
- Moralizing AGI governance and innovation (calling some “bad” and others “good”) is disingenuous. All players are selfish. We should focus squarely and with good faith on the incentives of the players involved in order to find a way forward for humanity, and intelligence itself.
Present focus:
- Growing Emerj.
- Putting the realpolitik of AGI and the posthuman transition on blast with The Trajectory.
Stay in touch:
Twitter / LinkedIn / Trajectory newsletter / AI in Business Podcast
Other:
- Literature, esp. Plutarch, Emerson, Montaigne
- Classical architecture and history.
- Practicing my Greek, though it’s still mediocre.
Latest articles and podcasts
Scoffing at AGI isn’t Intellectually Honest Anymore
In 2025, it is no longer intellectually honest to completely shun the idea of artificial general intelligence (AGI) or AGI risk. Yet still, in Dec 2024 (the time of this…
Mike Brown – AI Cooperation and Competition Between the US and China [AGI Governance, Episode 2]
Joining us for the second episode of our AGI Governance series on The Trajectory is Mike Brown, Partner at Shield Capital and former Director of the Defense Innovation Unit…
Kindness and Intelligence in AGI
Any honest AGI thinkers are frank about the fact that we can’t possibly predict all of the actions or ideas from a posthuman intelligence vastly beyond ourselves. While it seems…
Sébastien Krier – Keeping a Pulse on AGI’s Takeoff [AGI Governance, Episode 1]
Sébastien Krier of Google DeepMind joins us in the first episode of a brand new AGI Governance series on The Trajectory. Beginning his career studying law at King’s College, Sébastien…
AGI Alignment – Cosmic vs Anthropocentric
AI alignment typically implies anthropocentric goals: “Ensuring that AGI, no matter how powerful, will serve the interests and intentions of humans, remaining always under our control.” – or – “Ensuring…