Learn from the Brightest Minds in Banking AI
Hi, I’m
Dan Faggella
I founded Emerj Artificial Intelligence Research, a market research company focused on the ROI of AI among the Fortune 500. I’ve conducted nearly a thousand interviews with Fortune 500 AI leaders (Raytheon, US Bank, etc.), C-level executives at AI unicorn startups (Dataiku, Domino Data Labs, OakNorth, etc.), and AI researchers (Yoshua Bengio, Nick Bostrom, etc.).
I believe:
- Creating posthuman intelligence will be the most morally consequential event we can think of. We should aim to ensure that this final creation is a worthy successor.
- Moralizing AGI governance and innovation (calling some “bad” and others “good”) is disingenuous. All players are selfish. We should focus squarely and with good faith on the incentives of the players involved in order to find a way forward for humanity, and intelligence itself.
Present focus:
- Growing Emerj.
- Putting the realpolitik of AGI and the posthuman transition on blast with the Trajectory.
Stay in touch:
Twitter / LinkedIn / Trajectory newsletter / AI in Business Podcast
Other:
- Literature, esp. Plutarch, Emerson, Montaigne
- Classical architecture and history.
- Practicing my Greek, though it’s still mediocre.
Latest articles and podcasts
Joscha Bach – Building an AGI to Play the Longest Games [Worthy Successor, Episode 6]
When it comes to cognitive architecture, philosophy, and AGI, few thinkers are as well-versed as Joscha Bach. Previously Principal AI Engineer of Cognitive Computing at Intel, today he serves as…
i-Risk – AGI Indifference (Not Malice) is Enough to Kill Us Off
Taking i-Risk seriously implies understanding that i-Risk is an X-Risk. In this article I’ll…
Jeff Hawkins – Building a Knowledge-Preserving AGI to Live Beyond Us (Worthy Successor, Episode 5)
Before the iPhone there was the Palm Pilot, and before OpenAI there was (and still is) Numenta – both were founded by Jeff Hawkins. Jeff joins us on The Trajectory…
Types of AI Disasters – Uniting and Dividing
If there was a global AI-caused disaster, would that bring about the real possibility of global AI policy and coordination? I’m very much of the belief that a brute arms…
Scott Aaronson – AGI That Evolves Our Values Without Replacing Them (Worthy Successor, Episode 4)
This week Scott Aaronson joins us on The Trajectory for episode 4 of the Worthy Successor series. Scott is a theoretical computer scientist and Schlumberger Centennial Chair of Computer Science…