AGI Terminology Overview: Artificial General Intelligence – Superintelligence – Artilect – Strong AI

While Nick Bostrom’s book Superintelligence (and its promotion by the likes of Bill Gates and Elon Musk) put the topic of post-human intelligence on the map, a variety of terms have been used over the years to refer to post-human intelligence – all pointing to roughly the same idea: machines with a greater degree of intelligence and capability than human beings.

I realize that I’ve used a variety of these terms over the years on my blog, and have rarely addressed their direct origins, or the thinkers behind them. From John Searle to Ben Goertzel, here’s a run-down of popular terms referring to post-human intelligence:

AGI Terminology

Artificial General Intelligence (AGI)

Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can.

It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies. Some researchers refer to Artificial general intelligence as “strong AI”, “full AI” or as the ability of a machine to perform “general intelligent action”; others reserve “strong AI” for machines capable of experiencing consciousness.

…AGI describes research that aims to create machines capable of general intelligent action. The term was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. [Source: Wikipedia]

Shane Legg is a cofounder of DeepMind Technologies, bought by Google in 2014. Shane remains the company’s Chief Scientist.

Ben Goertzel is the founder of OpenCog, SingularityNET, and other AI-related projects. He remains one of the best-known figures in the field of AGI, thanks to a combination of his intellectual contributions and his unique public personality. I’ve interviewed Ben on a number of occasions (2013, 2018).

Mark Gubrud is an Adjunct Assistant Professor at the University of North Carolina (a position listed on his LinkedIn profile as unpaid). Mark keeps a blog, but has not been a major contributor to the global AGI conversation since apparently coining the term in the late 90’s.

Superintelligence

By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.

Entities such as companies or the scientific community are not superintelligences according to this definition. Although they can perform a number of tasks of which no individual human is capable, they are not intellects and there are many fields in which they perform much worse than a human brain – for example, you can’t have real-time conversation with “the scientific community”. [Source: Nick Bostrom, 1997]

Bostrom began his academic teaching career at Yale, went on to found the Future of Humanity Institute at Oxford University in 2005, and is probably the best-recognized name in the global AGI conversation today. FHI has gained global prominence in recent years, and has spawned massive growth in similar academic institutions, from the Leverhulme Centre for the Future of Intelligence to the Stanford Institute for Human-Centered Artificial Intelligence. My 2015 interview with Bostrom is still online.

Artilect

This paper claims that the “species dominance” issue will dominate our global politics later this century. Humanity will be bitterly divided over the question whether to build godlike, massively intelligent machines, called “artilects” (artificial intellects) which with 21st century technologies will have mental capacities trillions of trillions of times above the human level.

Humanity will split into 3 major camps, the “Cosmists” (in favor of building artilects), the “Terrans” (opposed to building artilects), and the “Cyborgs” (who want to become artilects themselves by adding components to their own human brains). A major “artilect war” between the Cosmists and the Terrans, late in the 21st century will kill not millions but billions of people. [Source: Hugo de Garis, 2008]

Hugo de Garis has been writing and publishing about AGI (his term: “artilect”) since the 90’s. He was a professor at Xiamen University until his retirement in 2010. de Garis has predicted since the 2000’s that humanity is likely to come to a massive global conflict over whether or not to build post-human AI (what he calls the “Artilect War”). I consider his ideas (about the artilect war, and about the forces that may lead to global governance) prophetic, and am disappointed that he has largely been absent from the global AGI conversation in the last five years as the topic has gone mainstream.

Strong AI

The Chinese room argument is a thought experiment of John Searle (1980a) and associated (1984) derivation. It is one of the best known and widely credited counters to claims of artificial intelligence (AI)—that is, to claims that computers do or at least can (someday might) think.

According to Searle’s original presentation, the argument is based on two key claims: brains cause minds and syntax doesn’t suffice for semantics. Its target is what Searle dubs “strong AI.” According to strong AI, Searle says, “the computer is not merely a tool in the study of the mind, rather the appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states” (1980a, p. 417). Searle contrasts strong AI with “weak AI.”

According to weak AI, computers just simulate thought, their seeming understanding isn’t real understanding (just as-if), their seeming calculation is only as-if calculation, etc. Nevertheless, computer simulation is useful for studying the mind (as for studying the weather and other things). [Source: Internet Encyclopedia of Philosophy]

John Searle is an American philosopher best known in AI circles for his Chinese Room argument against strong AI. Searle’s presence in the modern AGI conversation is negligible, but his Chinese Room argument is still widely debated.


Header image credit: New York Times