Theories of AGI “Values”
People who fear AGI destroying humanity often fear that AGI will not share human values. People who advocate for building AGI soon often believe that AGI will naturally share human values….
Honest AGI thinkers are frank about the fact that we can’t possibly predict all of the actions or ideas of a posthuman intelligence vastly beyond ourselves. While it seems…
You’ve probably heard of… Well, how about… Taking i-Risk seriously implies planning for a future where… Taking i-Risk seriously implies understanding that i-Risk is an X-Risk. In this article I’ll…
This post is not intended as an article, but as an opportunity to define a term I often refer to in talks and essays. For more context on these essays,…
Nick Bostrom – former Founding Director of the Future of Humanity Institute at Oxford – joins this week on The Trajectory. Bostrom has plenty of formal accolades, including being the…
Ask a group of 7-year-olds “What do you want to be when you grow up?” and you might get a variety of answers that fall into the following categories… Given…
The 5 Stages of Grief Model isn’t perfect – but it also seems to overlap well with the experiences of grieving people. The psychological model itself needn’t be perfect –…
I don’t take my ideas all that seriously. Or yours, frankly, though I hope to learn from them. You and I both are morons and neurons. Morons: We each individually…
If you must kill me, reader, do it without malice or disdain. I’m just a man hurled into a world without any inherent meaning, born to die – like you….
If you claim to be fighting for positive AGI outcomes and you’re calling Altman “selfish” or “dangerous” or “evil,” you’re part of the problem. Here’s the TL;DR on this article:…