By Peter Voss | 16 April 2017
[You can, of course (and usually do), go from AGI-ish designs to narrow implementations.]
AGI has certain essential requirements—such as real-time learning (incl. one-shot, small data, and unsupervised), reasoning & planning, memory & context, focus & selection, metacognition, etc. A system that is not real-time adaptive, has no memory, or that cannot reason just isn’t very smart.
Narrow AI (NAI) utilizes the designer’s or programmer’s intelligence directly to best solve a particular well-defined problem, for which it is optimized.
There are good reasons for using this approach: It is usually the fastest and cheapest way to solve any given problem; you can use any technique, trick, or architecture available without having to worry about generality or compatibility; you can rapidly hard-code missing capabilities; and you don’t have to worry much about use-cases outside of current specifications.
This analysis holds true even for ‘self-learning’ approaches like machine learning/deep learning (ML/DL)—here one still manually selects and tags data, chooses a particular architecture, and optimizes parameters for the given application. Incidentally, as powerful and useful as these methods are, they have characteristics that are in many ways the opposite of what general intelligence demands.
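To make the point concrete, here is a minimal sketch (my illustration, not from the article) of a tiny perceptron trainer. Even though the system ‘learns’ its weights, everything else—the hand-labelled data, the feature encoding, the update rule, and the hyperparameters—is fixed by the designer for this one narrow task:

```python
# Minimal perceptron sketch: the 'learning' happens inside a frame
# that the designer has entirely hand-specified in advance.

# Hand-labelled training data (designer-supplied, task-specific).
data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 1), ((1.0, 1.0), 1)]

# Architecture and hyperparameters chosen manually for this one problem.
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1   # hand-tuned
epochs = 100          # hand-tuned

for _ in range(epochs):
    for (x1, x2), label in data:
        pred = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - pred
        # Perceptron update rule -- fixed by the designer, not learned.
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

predictions = [1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
               for (x1, x2), _ in data]
```

Only the two weight values and the bias are discovered by the system; change the task even slightly and a human must re-label the data and re-tune the pipeline—precisely the opposite of real-time, self-directed adaptation.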
For all the advantages of Narrow AI approaches in the short term, it is generally understood that our real long-term goal is AGI—systems that can think, learn, adapt, and reason as well as, or better than, humans.
There are numerous reasons that NAI is highly unlikely to lead directly to AGI, some of which have already been alluded to:
- NAI systems come in a very wide range of architectures, technologies, algorithms, and data representations, which are typically quite incompatible with one another, making them impossible to combine — never mind able to support each other synergistically, as AGI requires.
- Narrow applications usually have a lot of hard-coded logic or parameters, plus hand-crafted, pre-trained data sets which don’t lend themselves to real-time adaptive learning. At a more general level, these systems are not inherently designed to interactively generalize or adapt their knowledge and skills.
- Typical AI designs only require certain selected competencies, while totally missing others that are crucial to AGI. For example, few systems inherently support comprehensive dynamic short- and long-term memory, real-time one-shot and transfer learning, or integrated reasoning or metacognition. These missing features can’t just be ‘bolted on’!
- Commercial pressure exerts a significant pull in the direction of NAI — it takes a very specific understanding, vision, and dedication to keep pushing in the opposite direction. For the narrow kinds of problems currently being pursued (a chicken-and-egg situation), a directly human-designed, hard-coded problem solver will always outperform a generalist (a dedicated Go machine will beat an equally powerful AGI) — quite apart from the fact that AGI technology is still quite immature at this stage.
- Very few development teams today have, or are guided by, a viable theory of general intelligence. For that reason they are not likely to stumble onto the right design by accident. In fact…
- Almost all current AI work utilizes architectures and approaches incompatible with AGI. For example, the popular (and generally appropriate) engineering preference for highly modular, disparate components is the opposite of what AGI requires. The general absence of cognitive architectures is a further barrier to advanced intelligence.
Reprinted with permission from the author.