Hi, I’m Dan Faggella
I founded Emerj Artificial Intelligence Research, a market research company focused on the ROI of AI among the Fortune 500. I’ve conducted nearly a thousand interviews with Fortune 500 AI leaders (Raytheon, US Bank, etc.), AI unicorn startup C-level folks (Dataiku, Domino Data Labs, OakNorth, etc.), and AI researchers (Yoshua Bengio, Nick Bostrom, etc.).
I believe:
- Creating posthuman intelligence will be the most morally consequential event we can think of. We should aim to ensure that this final creation is a worthy successor.
- Moralizing AGI governance and innovation (calling some players “bad” and others “good”) is disingenuous. All players are selfish. We should focus squarely and in good faith on the incentives of the players involved in order to find a way forward for humanity, and for intelligence itself.
Present focus:
- Growing Emerj.
- Putting the realpolitik of AGI and the posthuman transition on blast with The Trajectory.
Stay in touch:
Twitter / LinkedIn / Trajectory newsletter / AI in Business Podcast
Other:
- Literature, esp. Plutarch, Emerson, Montaigne
- Classical architecture and history.
- I’m practicing my Greek, but it’s still mediocre.
Latest articles and podcasts
5 Reasons to Discuss the Worthy Successor Now
When I discuss potential posthuman trajectories, and the need to eventually let go of the hominid form and allow potentia to bloom, people will often ask: “Why talk about posthuman…
Why No One Talks About AGI Risk
AGI risk seems lower than it is because the people who know that AGI risk exists are almost all incentivized against talking about it openly. Broadly, there are two groups…
The Business of Value Itself – How We Should Steward the Future of Life
We can basically map business survival logic to civilizational survival logic, and in this article I’ll argue that we should. In both the case of stewarding financial resources and moral…
Yi Zeng – Exploring ‘Virtue’ and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]
This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing…
Potentia and Potestas: Achieving The Goldilocks Zone of AGI Governance
Some AI thinkers and funders believe that any governance is a net negative for innovation, and signals dictatorship or oligarchy. Arguments for this position include… In this article I’ll argue…