A Partial Inquiry on Fulfillment Beyond Humanity
In 2025, it is no longer intellectually honest to completely shun the idea of artificial general intelligence (AGI) or AGI risk.
Yet still, in Dec 2024 (the time of this writing), I hear statements like:
People make these statements not as hypotheses, not as opinions, but as statements of fact.
In 2015, there were a handful of thinkers throwing around ideas about AGI, and most of the eminent scientists in the field didn’t think it was even a remote possibility within their lifetimes.
Back in 2015, dismissing someone who talked about AGI as “just a confused person, or a stupid person, who doesn’t understand the science” might have been reasonable. After all, the idea wasn’t mainstream, and the major scientists in the field didn’t really believe in it.
But today things are different:
Could all these signals be misguided?
Could Bengio, Hinton (and now even LeCun!) be mistaken?
Could the military leadership of the US and China be wholly mistaken in believing that AGI is achievable in the coming decade (or even two decades)?
Yes, they could all be mistaken.
Every one of them. The researchers, the governments, the AGI labs themselves – all could fail to achieve AGI for hundreds of years – maybe forever.
I’m not saying: “They’re definitely right that AGI is a near-term possibility.”
But I will say two things:
- It is now intellectually dishonest to scoff at artificial general intelligence (AGI) as a near-term possibility.
- Shutting down conversation about something as impactful as AGI is irresponsible – frank and open dialogue about growing AI capabilities is necessary in order to navigate the future of humanity.
In 2025, saying “I think AGI won’t happen anytime soon because XYZ” seems like a reasonable position to hold. A good start to an honest debate.
Saying “AGI will obviously never happen and this is stupid to talk about” is flat-out ignoring the trends in the field and the opinions of its most eminent thinkers.
There is much to debate and discuss – trends, breakthroughs, and technical and economic variables that will affect the possibility of AGI – and we should discuss them all openly, suss out how AI might develop, and consider how it might impact humanity.
Read it all, watch it all, on both sides of the issues. From Bengio’s arguments to take AGI risk seriously, to Narrow Path, to Andrew Ng’s arguments for why AGI isn’t happening soon, to whatever else might give you new ideas.
The days of honestly dismissing talk of AGI as “ridiculous” are over.
Dishonest shut-downs of this discussion will only do harm – frank dialogue and open debate give us the best chance of arriving at the best future (AGI or not).