The cause is the highest moral aim that I could conceive of – and at age 24 I decided to dedicate my life entirely to the cause. My personal writing, charitable donations, and business endeavors (notably Emerj) are focused 100% on pursuing progress in the cause.
On this page, I provide a brief outline of the cause itself, as well as some background information on how I arrived at the idea, and a set of “frequently asked questions” about it.
The Cause, Defined:
“Uniting the world in determining and moving towards a beneficial trajectory for sentience and intelligence itself.”
The premise of the cause is relatively simple:
- Most humans can agree that things which are conscious (i.e. sentient, self-aware) actually “matter” on a moral scale
- By extension, it is reasonable to suppose that the richness and depth of sentience correlates to the moral value of an entity. So, a hornet or a cricket likely has less total sentient depth than a rabbit, which in turn has less sentient depth than a human. The relative value of the lives of these creatures is treated accordingly
- In the coming decades (could be 2-4 decades, could be 7-10 decades), it is reasonably likely that humans will:
- Enhance their own intelligence and sentience via cognitive enhancements (enhancing our emotions, our memory, our control over technology, etc)
- Create consciousness (i.e. sentience, self-awareness) within machines, by determining and replicating consciousness itself
- It is reasonable to suspect that technological progress in neuroscience and computer science cannot and will not be entirely stopped, even by united government efforts to do so
- If consciousness is what “matters” on a moral scale, then it follows that:
- (Note: I use consciousness because people understand the term, but I mean potentia, which is broader – read: Potentia – The Highest Moral Goal for Humanity)
- The creation and expansion of sentience itself is potentially the most morally important undertaking of all
- If this is the case, then it also follows that:
- A united human effort to determine a beneficial (rather than a tragic) trajectory of sentience is warranted (roughly outlined in my SDGs of Strong AI post from 2019)
I’ve occasionally worded the cause curtly as “what’s after people”, because that’s essentially what it implies. Just as skyscrapers, oil painting, and democracy were not created by chimpanzees, the Milky Way will not be populated by homo sapiens. Further, more advanced, and vastly more intelligent entities will make decisions based on vastly more complex criteria than our current rough sense of “morality” ever could.
If the progression of technology cannot be stopped, then the best humanity can do is:
- Take a deep interest in the ethical and social concerns of future intelligence, in both its current and future implications
- Guide the development of these technologies in ways that are more likely to create more sentient good (the opposite of suffering), and less sentient harm (suffering, and extinction/extinguishing of life)
It isn’t clear just how to guide these technologies.
In my opinion, the management of this technology transition will likely involve:
- A broad interest in the future of sentience and intelligence, among the leaders of industry and government, and among citizens around the globe
- A global, united steering organization, comprising leaders and research teams from around the world
- Ethical oversight and technological development goals
- A global, united transparency organization, involving government security efforts around the world
- Preventing the development or use of unsafe technologies, limiting the spread or use of potentially dangerous technologies
Allowing neurotechnologies (enhanced human beings) and advanced AI (strong artificial intelligence) to be developed in a competitive “arms race” between companies and governments seems to be a recipe for massive conflict, and I don’t currently believe it to be the path with the best odds of yielding good outcomes for future sentient life (both human and post-human).
I’ll be updating this document periodically as the details about the cause unfold, but the general premise (“Uniting the world in determining and moving towards a beneficial trajectory for sentience and intelligence itself”) remains.
The Origin of the Cause:
- 2011: While studying the neuroscience and psychology of skill development in graduate school at UPENN, I began hearing academic “whispers in the breeze” about something called “machine learning”, a science rooted in my field (cognitive science) but intended to extend its principles into computation. I had journaled about the possibilities of machines becoming conscious (sentient), and some artificial intelligence experts were positing that machine learning might be critical in achieving this goal
- 2012: After much mulling over the implications of burgeoning neurotechnologies (Brown’s Braingate, among others), and learning more about the developments in machine learning, I came to the firm conclusion that: “The most important activity of humanity will be determining what future sentience we create beyond ourselves.” I decided promptly that I would concern myself exclusively with this objective for the rest of my life
- 2013: I started the predecessor to Emerj AI Research, mainly to explore the ethical concerns of AI and neurotechnologies, and to interview experts in AI, philosophy, and psychology. With the cause in mind but without a clear business model for Emerj, I sold my first business (a martial arts academy that I founded as an undergrad), and started an eCommerce firm based on my UPENN grad school studies.
- 2014: Small speaking engagements at New England universities expanded into TEDx talks and more formal presentations on AI and ethics
- 2017: My eCommerce company got to $2,000,000 in revenues and I was able to sell the business for over $1,000,000, giving me ample funds to pursue Emerj full time without having to sacrifice Emerj equity to investors who might not share my moral vision
- 2018: My first speaking engagements for INTERPOL, The World Bank, and the United Nations, about near-term AI risks and use-cases.
- 2019: I presented my work on generative AI at United Nations Headquarters, including a deepfake video of the head of the UN’s division focused on policing (full video here)
- 2023: The Trajectory is launched, a newsletter and podcast dedicated entirely to the realpolitik of the posthuman transition, including interviews with DeepMind, Yoshua Bengio, and others.
- 2024: Who knows?
I use the language “the cause” as a bit of a nod to Emerson. He uses the term to refer to an organizing purpose, and his writings helped me arrive at a clearer conception of the cause itself.
Q: Isn’t protecting and sustaining human life the most important moral task?
A: Certainly human life is valuable, and ensuring the well-being of humans (and other self-aware biological life) is extremely important. Given a long enough time horizon, there may, in fact, be more morally valuable entities than today’s human beings (just as there are more morally valuable creatures today – ourselves included – than there were 500,000,000 years ago), and concern for that great “trajectory of sentience” is unimaginably morally consequential. Humans should be neither neglected, nor held as the eternally highest benchmark of morally worthy life (note: Here’s what I mean in terms of “moral value”).
Q: Is any of this “trajectory of consciousness” even possible given the pace of technology? Isn’t it more like a million years away?
A: It’s possible that 3,000,000 years from now, humanity (as it is today) will still be the highest intelligence in the known universe. What a sad state of affairs that would be. We’ve done our best to poll many artificial intelligence researchers and experts about the development of artificial superintelligence, and about a transhuman transition (most of these polls – such as our AI risk survey – can be found online at Emerj.com), and many researchers are of the belief that we may live through a “takeoff” of intelligence within our lifetimes. It should be noted that the cause is – in my opinion – still of paramount import, even if it’s not viable within the next few hundred years. The consequences of “the grand trajectory of intelligence itself” are too overwhelmingly huge to take out of focus for our species.
Q: How will humans come together to solve this problem?
A: Fortunately, more and more people are asking this question every day (vastly more people than when I first began asking it in 2012). There is no clear-cut answer, but my aim is to explore and encourage solutions that are an alternative to “arms race” intelligence development.
Q: All this talk about consciousness and suffering – are you a utilitarian?
A: Not entirely. This topic goes into a lot more depth, but I usually explain my position on “ethical theory” this way:
- At present, the basic premise of utilitarianism (that actions should be weighed by their total net effect on the pleasure or suffering of sentient entities) seems to be the best moral approach in many, many circumstances. It is unfortunate that “utilitarian calculus” (determining the impact of one’s actions across all sentience, over time and into the future) is so doggone hard to measure. Nevertheless, in general, I find the premise of utilitarianism to be an attractive “lens” through which to view moral decision-making most of the time.
- I also believe that future intelligence will develop more robust moral theories than utilitarianism. “Utilitarianism” in theory and in action is limited by the hardware and software of the human brain. Being attached to such a theory is ridiculous. Nobody wonders about the best ideas from chimpanzee morality, and how humans can live out those ideas. We don’t wonder about this because it’s blatantly evident that chimpanzees cannot think about consequences or concepts like human beings can, and whatever their ideas might be, they wouldn’t be likely to fit in our present context of civilization, nor would they account for our deeper and more robust understanding of the world. It seems likely that future superintelligent entities will quickly develop a much more robust and nuanced ethical theory than we humans ever could, and that arriving at this future and higher morality – though fraught with danger and by no means a guarantee for our own wellbeing – is about as good a goal as we could ever establish in terms of pursuing “the good” itself.
- As much as I wish it weren’t the case, I believe that morality may always be contextual and subjective, and that the discovery of some “final” or “ground truth” moral idea is not remotely possible. It’s likely that – even when artificial superintelligences exist – the best they’ll be able to do (morally) will be to discern moral ideas that fit their higher knowledge of the world, and their future context and needs. I wish I could say that the universe had some favored plan or course of action for us; it’s a nice thought, and I’m sure it would bring me tremendous comfort. If I were a betting man, however, I’d suspect that we’re flailing in a cold and indifferent universe, and that we will work our way to higher (but still arbitrary) moral theories as we work our way to higher intelligence and higher scientific understanding.
- I dedicated an entire TEDx talk to this topic.
Q: What if – even in a hundred million years – there is never any intelligence on this planet more developed and intelligent than homo sapiens?
A: Given a long enough time horizon, I consider this to be extremely unlikely unless we decide to ruin the planet or nuke each other incessantly. Higher and higher forms of intelligence have always developed, and we don’t seem to have any good reason to suspect that this won’t continue further – either through evolutionary selection or the volitional efforts of humans.