Hi, I’m Dan Faggella.
I founded Emerj Artificial Intelligence Research, a market research company focused on the ROI of AI in the Fortune 500. I’ve conducted nearly a thousand interviews with Fortune 500 AI leaders (Raytheon, US Bank, etc.), C-level executives at AI unicorn startups (Dataiku, Domino Data Labs, OakNorth, etc.), and AI researchers (Yoshua Bengio, Nick Bostrom, etc.).
I believe:
- Creating posthuman intelligence will be the most morally consequential event we can think of. We should aim to ensure that this final creation is a worthy successor.
- Moralizing AGI governance and innovation (calling some players “bad” and others “good”) is disingenuous. All players are selfish. We should focus squarely, and in good faith, on the incentives of the players involved in order to find a way forward for humanity, and for intelligence itself.
Present focus:
- Growing Emerj.
- Putting the realpolitik of AGI and the posthuman transition on blast with The Trajectory.
Stay in touch:
Twitter / LinkedIn / Trajectory newsletter / AI in Business Podcast
Other:
- Literature, esp. Plutarch, Emerson, Montaigne
- Classical architecture and history.
- I’m practicing my Greek, but it’s still mediocre.
Latest articles and podcasts
AGI is What Money Wishes it Was
It’s no wonder all the money is flooding into AGI. It will be no mystery when even more of the money in the world is hurled into explicit building or…
The Financial Singularity – Why AGI Attracts All Capital
The hypothesis of the Financial Singularity: At some point in the 21st century, over 50% of all capital in first world economies will be allocated directly or indirectly to either…
Max Tegmark – The Lynchpin Factors to Achieving AGI Governance [The Trajectory Series 4: AI Safety Connect, Episode 1]
This is an interview with Max Tegmark, MIT professor, Founder of the Future of Life Institute, and author of Life 3.0. This interview was recorded on-site at AI Safety Connect…
Moving Beyond the Current Limited AGI Alignment Dialogue
The present AGI alignment dialogue rests on a handful of shaky premises: Premises 2-5 assume a completely anthropocentric worldview, where moral value beyond the boundaries of the hominid form is…
Ultimate Retirement – An Ideal Future for Obsolete Humans
At some point, AGI or non-biological intelligence will be accomplishing most of what there is to be done in the universe, and humanity won’t be contributing meaningfully to almost anything…