I get categorized incorrectly on X pretty frequently. Sometimes I’m an “AI doomer” or a “misanthrope who wants robots to take over,” etc.
In order to clarify my position I wanted to summarize my organizing idea. A single tweet is a tiny slice of a larger set of ideas. Below are the core tenets of what I currently believe (it’s all subject to change), with references to my more complete thoughts on each point.
So if you read a tweet and think “Wait… does that mean that Dan wants ___?!”, refer to this page and get a sense of what I really intend, and my reasoning behind it. I’m not saying you’ll agree with my reasoning (quite the opposite), or that it’s even right – but you can identify more directly what it is you disagree or agree with.
My cause, or organizing idea, could best be summarized in a handful of bullets:
- 1 – Life / potentia probably matters. Life (the flame, potentia) seems to be the morally relevant stuff. Life itself – non-dead matter, especially things with agency and consciousness – seems morally valuable both because (a) it can experience things (pain-pleasure / awareness) and (b) it might conjure new powers and new values. In the context of the cause, a good analogy for life itself is a flame.
- 2 – AGI seems reasonably likely. It seems reasonably likely that artificial general intelligence (AGI) will be created within the coming 10-30 years. It seems reasonably likely that AGI could be conscious and have agency. Based on point [1] above, this would easily be the most morally relevant human-imaginable thing.
- 3 – Posthuman intelligences can’t be predicted. An entity with near-infinitely greater cognitive abilities, memory, physical senses, and embodiments than humans would almost certainly not be controllable or predictable – and its wide space of possible values and actions shouldn’t be expected to always align exactly with human survival, never mind human wellbeing. If life expands [1] beyond hominids as it did on the way to hominids, AGI’s aims will be as incomprehensible to us as ours are to sea snails.
- 4 – Humanity will eventually attenuate. Given a long enough time horizon, humanity will cease to be. Life is constantly trying to find “the best way to be” in this dynamic system of a universe. The “eternal hominid kingdom” is not a future we get to choose. We should be careful about creating posthuman life, but we should also not be foolish enough to deceive ourselves into thinking that hominids will hold the scepter of power and agency in a billion years.
- 5 – We ought to avoid armageddon, and coordinate to ensure that AGI carries (not extinguishes) the flame. We should do all that we can to avoid brutal arms-race dynamics, and any kind of armageddon scenario that could put out the flame of life and potentia itself. Because of [3], we should aim for some level of international coordination (in innovation and/or governance) in order to ensure that we don’t conjure intelligences that could end life (extinguish the flame). If we are to eventually attenuate [4], we ought to shepherd life forward deliberately.
- Related essays and resources:
- **Worthy Successor – The Purpose of AGI** (this is the best summation of the cause in a single essay)
- Dan Hendrycks – Avoiding an AGI Arms Race
- The Posthuman Transition in 7 Phases
- The International Governance of AI – We Unite or We Fight
- The SDGs of Strong AI
The cause boils down to:
Encouraging international conversation between innovators and regulators in order to (a) avoid AI-related conflict and (b) discern the best way to move towards a posthuman transition and bring about a Worthy Successor.
My writing and speaking are just means to this end.
Emerj is a means to this end.
My life is a means to this end.
If I stray from the purpose listed above I expect to be rightly called out for it. If any of these ideas are treated as unquestionable doctrines or “truths” that don’t warrant more investigation, I also expect to be rightly called out for it. I expect my good friends will do just that under either circumstance.
At the end of the day I’m literally just a human making sense of the existential condition and acting in accordance with an incessant grappling with said condition. If you disagree with the premises of my cause, or my reasoning behind it – that’s just fine, I won’t insist that you agree with me. I’m not your enemy, and my opinions aren’t constructed to offend anyone.