Why aren’t more people working on AGI?

By Peter Voss | 26 December 2016

Erica is a semi-autonomous android, the product of the most funded scientific project in Japan. (Credit: YouTube / screengrab)

This question came up again at a recent debate on the merits of advanced AI.

Here’s a list of some of the most common reasons, plus an analysis of why there seems to be so little progress in AGI development:

Why are so few researchers pursuing AGI?

Many researchers believe: Human-level AGI is not possible because…

  • Biological beings (especially humans) have something special (a soul?) that cannot be replicated in machines
  • Human intelligence requires consciousness, which in turn arises from weird quantum processes that cannot be implemented in computers
  • They tried in their youth (20–40 years ago) and failed; they now conclude that it can't be done

Others have fundamental problems with ‘general intelligence’…

  • Believe that it is inherently an invalid concept (‘g’ in psychology has become quite unpopular — one could even say ‘politically incorrect’)
  • Overall intelligence is just a collection of specialized skills, and we just need to somehow engineer or create each of them individually

A very common objection is that the time is not ripe…

  • AGI can’t be achieved within their lifetime, so there is no point
  • Hardware is not nearly powerful enough

Most researchers believe that ‘nobody knows how to build AGI’ because…

  • “We don’t understand intelligence/intuition/consciousness, etc.”
  • They haven’t seen or heard of any viable theories of AGI
  • They aren’t even looking for that possibility (because of many of the other reasons listed here)

Others believe we should rather copy the brain in some way to achieve human-level AI…

  • Reverse engineer the brain with custom chips—one area at a time
  • Simulate a human brain in a supercomputer
  • Build specialized hardware that copies brain neural structure
  • Grow biological brains in a vat

Some don’t think that AGI is all that important because…

  • Narrow AI already exceeds human abilities in many areas
  • They don’t believe that self-improving AI (‘Seed AI’) is viable
  • Don’t share the vision of AGI’s benefits, or our need for it

Some simply don’t have the ‘patience’ for such a long-term project…

  • Can get quicker results (financial and other) pursuing Narrow AI

Quite a few people think that AGI is highly undesirable because…

  • It would lead to massive unemployment, or is generally not socially acceptable
  • We don’t know how to make it safe, and it will likely destroy us

Finally, there are those who would love to work on AGI, but…

  • Don’t know how to do it, and see no viable model
  • Are researchers who will get little academic respect, support, or funding
  • Can’t get their AGI efforts funded

All of the above combine to create a dynamic where AGI is not fashionable, further reducing the number of people drawn into it!

Why is there so little progress in (workable) AGI models and systems?

See above: Why are so few researchers pursuing AGI?

The field is dramatically underfunded

Most theories of general intelligence, and approaches to AGI, are quite poor:

  • Poor epistemology: understanding the nature of knowledge and certainty, how it is acquired and validated, the importance of context, etc.
  • Poor understanding of intelligence: knowledge vs adaptive learning, static vs dynamic, offline vs interactive, big data vs instance learning, etc.
  • A poor understanding of other key concepts involved: grounding, understanding, concepts, emotions, volition, consciousness, etc.
  • A lack of logical integration of connectionist, statistical, logic, and other AI techniques and insights
  • Not appreciating the importance of a comprehensive cognitive architecture, and instead looking for an overly simple, ‘silver-bullet’ approach
  • Overly modular designs, incompatible with deep cognitive integration
  • Focusing on only one, or a few, aspects of intelligence
  • Focusing exclusively on the wrong level: either too high (logical reasoning) or too low (perception/action)
  • Too much focus on copying the brain — i.e. biological feasibility
  • Using physical robots prematurely (i.e. now)
  • A lack of commonality/ compatibility between various AGI efforts
  • Performance expectations are set too high for any specific functionality: early general intelligence is not likely to be competitive with narrow AI

Of course, the (perceived) lack of progress feeds the lack of interest and people working in the field… a non-virtuous cycle.

Reprinted with permission from the author.

Peter Voss is an entrepreneur, inventor, engineer, scientist and AI researcher. In 2001 he (co-)coined the term ‘AGI’ (Artificial General Intelligence), and has been working towards achieving high-level AI since then. He also has a keen interest in the inter-relationship between philosophy, psychology, ethics, futurism, and computer science, and frequently writes and talks on these topics.




  1. Perhaps the Cyc project provided a starting point to examine what is needed in terms of inference engines, and a peek at knowledge representation. We already have systems that can learn from data; to go to the next level, we need a universe of discourse for such a machine, where it can roam and make observations, as in the natural world. We already have systems that can learn and write their own rules.

     I have often thought we needed a robust machine language for expression and communication with humans — humans could learn the language too. I never explored the theoretical underpinnings of such a language; ambiguity is likely unavoidable, leading to such things as "brittleness". Solving this would be a tremendous step. Then too, knowledge itself needs to be stored and represented: the real world consists of objects and relationships, and we "think" with rules. How do we structure all of this so it can be accessed and used to make inferences? Logic is not absolute either; some conditions indicate possible occurrences statistically.

     If the universe of discourse is "flat", then the automaton would in principle have to look at ALL of it when solving a problem, for example. This is so inefficient that there need to be ways to find what you need when you want it. The internet is somewhat a model for this already — a modern miracle. The internet is like a living database, giving you back what you wish, but it is not terribly efficient; there must be much better ways.

     At the top level, the language must be able to pose questions or state goals — things for the automaton to do. Perhaps an automaton in such a universe of discourse could have goals like "learn about learning". If not, then the exact inferencing mechanisms need to be hard coded. Cyc identified many types of inferencing mechanisms as being needed.

