By Peter Voss | 17 January 2017
Any AGI must at a minimum possess a core set of cognitive abilities — as a simple description of human intelligence will confirm. These skills must be implementable in a practical way — i.e. interacting with incomplete, potentially contradictory, and noisy environments using finite computing and time resources.
All of these abilities must be able to operate in real-time on (at least) 3-dimensional, dynamic (temporal) data, as well as stimulus-response (causal) relationships. Operations must be scalar, not just binary (degrees of fit, and certainty).
Here is a basic list:
- Recognize existing patterns/entities, even with partial and/or noisy input. “What is this?”
- Determine what existing categories a pattern belongs to (and how well it fits). “What kind of thing is this?”
- Predict the remainder and/or continuations of a given partial pattern. “What is next?”
- Be able to learn new patterns/entities, and to categorize them.
- Focus/selection/importance: select pertinent information at the input level as well as during learning and cognition (see Salience below).
- Be able to learn new skills — both mental and ‘physical’.
- Be able to learn via a wide range of modes, including unsupervised, supervised, exploration, instruction, etc.
- Support integrated long-term memory — i.e. its ‘knowledge base’ must be immediately available to all other abilities.
- Support integrated working memory (recent history that provides current context, and may or may not be remembered).
- Be able to modify all knowledge and skills as new information becomes available.
- Be able to reason (abstractly) using existing context. e.g. “What is the contradiction in what the two salespeople just said?”
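The first requirement on this list — scalar, degree-of-fit recognition of partial or noisy input — can be sketched in a few lines. Everything here (the feature dictionaries, the `degree_of_fit` scoring, the coverage penalty) is an illustrative assumption, not the author's mechanism:

```python
# Minimal sketch of scalar (not binary) pattern recognition.
# Stored patterns are feature dicts; matching returns a degree of
# fit in [0, 1] and tolerates missing (partial) input features.

def degree_of_fit(observed, stored):
    """Fraction of the stored pattern's features the observed (possibly
    partial) input agrees with; features we never saw are skipped."""
    shared = [f for f in stored if f in observed]
    if not shared:
        return 0.0  # no evidence either way
    hits = sum(1 for f in shared if observed[f] == stored[f])
    coverage = len(shared) / len(stored)  # penalize thin evidence
    return (hits / len(shared)) * coverage

def recognize(observed, memory):
    """'What is this?' — return the best-fitting known entity and its score."""
    scored = {name: degree_of_fit(observed, p) for name, p in memory.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]

memory = {
    "cat": {"legs": 4, "fur": True, "barks": False},
    "dog": {"legs": 4, "fur": True, "barks": True},
}
# Partial, noisy input: only two of three features observed.
print(recognize({"legs": 4, "barks": True}, memory))
```

Note that the answer carries a certainty score rather than a yes/no verdict, matching the "degrees of fit, and certainty" requirement above.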
While most examples here are language-based, these abilities must also operate in a purely perception-action mode.
Two other noteworthy requirements are understanding and being able to determine salience.
. . .
In addition to the positive checklist, we can also formulate a quick negative reality check:
What does not qualify as AGI? A system fails this check if:
- It cannot learn new knowledge and skills incrementally and interactively, in real time.
- It cannot deal with incomplete and contradictory information.
- It cannot learn from single instances in real time.
- It cannot accumulate and adapt its knowledge and skills over time.
- It cannot learn unsupervised or autonomously (always needs ‘labels’ or a teacher).
- It assumes unlimited, or (in principle) impractical amounts of computing resources.
- It cannot sense real-world time and space (in some way) and act on it in real time — i.e. if it cannot conceptualize and interact with the world.
. . .
Salience — selecting what is relevant and important to a given context and goal — is an important aspect of intelligent systems.
This comes into play at different levels of cognition:
Firstly, in autonomous data selection on input: which senses and features to process and/or ignore, and what level of importance to assign to them. For example, most animals are wired to pay extra attention to fast-moving items in their visual field, and to loud sounds. For AGI we have to assume that far more sensory input will be available than can (or should) reasonably be processed. We must also assume that relevant feature extractors, such as edge or shape detectors, need to be prioritized. Some semi-automatic mechanism must perform this pre-selection, and it should remain under overall high-level cognitive control, which can preset its parameters — for example, biasing it to focus on changes in color or pitch.
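This kind of pre-selection could look like the following sketch. The built-in weights, the processing budget, and the bias interface are all hypothetical illustrations of the idea, not a proposed design:

```python
# Sketch of semi-automatic input pre-selection. Each sensory feature
# gets a salience score from built-in weights (motion and loudness are
# boosted, echoing the animal example), which high-level cognition
# can re-bias before processing begins.

DEFAULT_WEIGHTS = {"motion": 3.0, "loudness": 2.0, "color": 1.0, "pitch": 1.0}

def select_input(features, bias=None, budget=2):
    """Keep only the `budget` most salient features.
    `features` maps feature name -> raw signal strength; `bias` lets
    higher-level cognition preset the weights (e.g. {"color": 5.0}
    to focus attention on color)."""
    weights = dict(DEFAULT_WEIGHTS)
    weights.update(bias or {})
    salience = {f: v * weights.get(f, 1.0) for f, v in features.items()}
    ranked = sorted(salience, key=salience.get, reverse=True)
    return ranked[:budget]

raw = {"motion": 0.2, "loudness": 0.1, "color": 0.5, "pitch": 0.4}
print(select_input(raw))                       # default wiring
print(select_input(raw, bias={"color": 5.0}))  # cognitively re-biased
```

With default weights the weak motion signal still outranks the stronger color signal; a top-down bias reverses that, which is exactly the "preset parameters" role described above.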
Once input has been appropriately selected and prioritized, pattern matching, categorization, and conceptualization mechanisms need to be selected according to contextual requirements. What matters currently? For example, are we trying to match incoming patterns against each other, or against some internal reference; are we interested in shape or texture patterns; or are we just interested in object collisions?
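Selecting a matching mechanism by context can be sketched as a simple dispatch; the matcher registry and the shape/texture features are made-up examples of the contextual choices the paragraph describes:

```python
# Sketch: the same pair of objects routed to different pattern
# mechanisms depending on what matters in the current context.

def by_shape(obj, ref):
    return obj["shape"] == ref["shape"]

def by_texture(obj, ref):
    return obj["texture"] == ref["texture"]

MATCHERS = {"shape": by_shape, "texture": by_texture}

def match(context, obj, ref):
    """Pick the matcher the current contextual requirements call for."""
    return MATCHERS[context](obj, ref)

cup = {"shape": "cylinder", "texture": "smooth"}
can = {"shape": "cylinder", "texture": "ridged"}
print(match("shape", cup, can))    # True (same shape)
print(match("texture", cup, can))  # False (different texture)
```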
Higher level goals also need to be selected and prioritized according to salience. What are we trying to achieve right now? What dependencies are there? What is most important in the current context?
Finally, the overall architecture has to allow for consolidation and forgetting. What information or experience should be consolidated? What should be forgotten (or archived)?
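The consolidation/forgetting cycle could be sketched as a periodic maintenance pass over working memory. The decay rate and the two thresholds are arbitrary illustrative values, not claims about how such a system should be tuned:

```python
# Sketch: working-memory entries decay over time; items reinforced
# strongly enough are consolidated into long-term memory, items that
# fade below a floor are forgotten.

def tick(working, long_term, decay=0.5, keep=0.2, promote=2.0):
    """One maintenance pass: decay activations, consolidate strong
    items into `long_term`, drop weak ones. Thresholds illustrative."""
    survivors = {}
    for item, strength in working.items():
        strength *= decay
        if strength >= promote:
            long_term.add(item)         # consolidate
        elif strength >= keep:
            survivors[item] = strength  # retain in working memory
        # else: forgotten
    return survivors

working = {"phone number": 0.5, "face at the door": 6.0, "ad jingle": 0.1}
long_term = set()
working = tick(working, long_term)
print(long_term)  # the strongly activated item was consolidated
print(working)    # the weak jingle was forgotten
```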
An AGI needs to have mechanisms in place at each of these levels (and probably some others) to evaluate salience and to adjust cognition accordingly.
Reprinted with permission from the author.