Kindness and Intelligence in AGI
Any honest AGI thinker is frank about the fact that we can’t possibly predict all of the actions or ideas of a posthuman intelligence vastly beyond ourselves. While it seems…
You’ve probably heard of… Well, how about… Taking i-Risk seriously implies planning for a future where… Taking i-Risk seriously implies understanding that i-Risk is an X-Risk. In this article I’ll…
This post is not intended as an article, but as an opportunity to define a term I often refer to in talks and essays. For more context on these essays,…
Nick Bostrom – former Founding Director of the Future of Humanity Institute at Oxford – joins us this week on The Trajectory. Bostrom has plenty of formal accolades, including being the…
Ask a group of 7-year-olds “What do you want to be when you grow up?” and you might get a variety of answers that fall into the following categories: Given…
The 5 Stages of Grief Model isn’t perfect – but it also seems to overlap well with the experiences of grieving people. The psychological model itself needn’t be perfect –…
I don’t take my ideas all that seriously. Or yours, frankly, though I hope to learn from them. You and I both are morons and neurons. Morons: We each individually…
If you must kill me, reader, do it without malice or disdain. I’m just a man hurled into a world without any inherent meaning, born to die – like you….
If you claim to be fighting for positive AGI outcomes and you’re calling Altman “selfish” or “dangerous” or “evil,” you’re part of the problem. Here’s the TL;DR on this article:…
Episode 1 of The Trajectory is with none other than the Turing Award winner, MILA Scientific Director, ML godfather, and (very recently) AI safety advocate, Yoshua Bengio. My first interview…