Learn from the Brightest Minds in Banking AI
Hi, I’m
Dan Faggella
I founded Emerj Artificial Intelligence Research, a market research company focused on the ROI of AI among the Fortune 500. I’ve conducted nearly a thousand interviews with Fortune 500 AI leaders (Raytheon, US Bank, etc.), C-level executives at AI unicorn startups (Dataiku, Domino Data Labs, OakNorth, etc.), and AI researchers (Yoshua Bengio, Nick Bostrom, etc.).
I believe:
- Creating posthuman intelligence will be the most morally consequential event we can think of. We should aim to ensure that this final creation is a worthy successor.
- Moralizing AGI governance and innovation (calling some “bad” and others “good”) is disingenuous. All players are selfish. We should focus squarely and with good faith on the incentives of the players involved in order to find a way forward for humanity, and intelligence itself.
Present focus:
- Growing Emerj.
- Putting the realpolitik of AGI and the posthuman transition on blast with the Trajectory.
Stay in touch:
Twitter / LinkedIn / Trajectory newsletter / AI in Business Podcast
Other:
- Literature, esp. Plutarch, Emerson, Montaigne
- Classical architecture and history.
- I’m practicing my Greek, but it’s still mediocre.
Latest articles and podcasts
Becoming Cosmically Informed and Cosmically Aligned
People often read about the worthy successor, or generally axiologically cosmic ideas, and ask: “Okay, Dan, but how do I actually DO something about this?” As it turns out there…
Our Final Imperatives
Like it or not, we humans share the fate of all forms (individuals, species, substrates): to transform or be destroyed. I suspect we have from 15 to 40 years before…
Short Human Timelines – Keep the Flame Going When Our Torch Goes Out
Given a long enough time horizon, all things (individuals, species, forms) die out completely or transform into something else. Lucretius and the second law of thermodynamics concur here. In this…
Joshua Clymer – Where Human Civilization Might Crumble First (Early Experience of AGI – Episode 2)
This is an interview with Joshua Clymer, a technical AI safety researcher at Redwood Research. Before that, he researched AI threat models and developed evaluations for self-improvement capabilities at METR…
Pedestal Cope – Assuming Human “Plot Armor”
In business, blindly assuming you’ll be profitable is a recipe for bankruptcy. In the outdoors, blindly assuming you’ll find water and shelter is a recipe for death. And yet, many…