Scoffing at AGI isn’t Intellectually Honest Anymore

In 2025, it is no longer intellectually honest to dismiss outright the idea of artificial general intelligence (AGI) or AGI risk.

Yet in December 2024 (the time of this writing), I still hear statements like:

  • “Pfft, everyone knows AGI isn’t realistic, AI systems will never have agency. There will never be a time when they don’t simply do what humans tell them to do, period.”
  • “Pfft, if you really understood the science you’d stop talking about AGI bullshit. It’s just math, there is no hard takeoff for AGI, it’s never going to get that powerful, it’s impossible.”
  • “Everyone knows that scaling these current systems won’t get to AGI. And the number of new breakthroughs we would need to get there might be thousands of years away.”

People make these statements not as hypotheses, not as opinions, but as statements of fact.

In 2015, there were a handful of thinkers throwing around ideas about AGI, and most of the eminent scientists in the field didn’t think it was even a remote possibility within their lifetimes.

Back in 2015, dismissing someone talking about AGI as “just a confused or stupid person who doesn’t understand the science” might have been reasonable. After all, it wasn’t mainstream, and the major scientists in the field didn’t really believe in it.

But today things are different:

  • The famed godfathers of machine learning, Bengio and Hinton, are both vocal about artificial general intelligence being a near-term risk for humanity.
  • In November 2024, Yann LeCun, the most credible and staunch skeptic of fast AI progress (and of AGI risk), stated that he believes human-level AI is only 5-10 years away.
  • Essentially all of the major lab leaders (Altman, Hassabis, and others) have signed the Statement on AI Risk, and essentially all of them have given lengthy interviews about the real potential of creating something beyond human control.
  • Many of the most valuable companies in the world are, unabashedly, racing towards posthuman intelligence (Microsoft, Google, Meta), and the most valuable company in the history of the world at the time of this writing is NVIDIA, an AI hardware firm.
  • The United States is proposing a Manhattan Project for AGI in order to beat China (which is engaged in its own race towards AGI). The two great powers are overtly racing against each other to create posthuman intelligence.
  • The Secretary-General of the United Nations has stated many times that he believes loss of control over AI / AGI to be a serious risk to humanity.
  • Very real strategies for AGI race dominance (Situational Awareness) and AGI race deterrence (The Compendium, among others) are circulating with increasing popularity by the month, often authored by credible scientists close to the field.

Could all these signals be misguided?

Could Bengio, Hinton (and now even LeCun!) be mistaken?

Could the military leadership of the US and China be wholly mistaken in believing that AGI is achievable in the coming decade (or even two decades)?

Yes, they could all be mistaken.

Every one of them. The researchers, the governments, the AGI labs themselves – all of them could fail to achieve AGI for hundreds of years, maybe forever.

I’m not saying: “They’re definitely right that AGI is a near-term possibility.”

But I will say two things:

  1. It is now intellectually dishonest to scoff at artificial general intelligence (AGI) as a near-term possibility.
  2. Shutting down conversation about something as impactful as AGI is irresponsible – frank and open dialogue about growing AI capabilities is necessary in order to navigate the future of humanity.

In 2025, saying “I think AGI won’t happen anytime soon because XYZ” seems like a reasonable position to hold. A good start to an honest debate.

Saying “AGI will obviously never happen and this is stupid to talk about” is flat-out ignoring the trends in the field and the opinions of its most eminent thinkers.

There is much to debate and discuss. Trends, breakthroughs, and technical and economic variables will all impact the possibility of AGI, and we should examine them openly to suss out how AI might develop and how it might impact humanity.

Read it all, watch it all, on both sides of the issue: from Bengio’s arguments for taking AGI risk seriously, to Narrow Path, to Andrew Ng’s arguments for why AGI isn’t happening soon, to whatever else might give you new ideas.

The days when one could honestly dismiss talk of AGI as “ridiculous” are over.

Dishonest shut-downs of this discussion will only do harm – and frank dialogue and open debate give us the best chance of arriving at the best future (AGI or not).