By Alberto Romero | 1 June 2021
AI research keeps reaching new milestones. The deep learning paradigm reaffirms its dominance year after year. It seems safe to assume that neural networks will govern the future of AI. Yet promises keep falling short. Artificial general intelligence (AGI) doesn’t seem to be near, despite what some claim. AI systems are still very dumb and narrow. There remains a gap between where AI is and where we want it to be.
Are today’s approaches the way to find and close this gap? In this article, I explain why AI systems will need to be embodied, grow up, and live in the world to one day reach intelligence. Enjoy!
Does the mind need a body?
It was René Descartes, in his 1641 book Meditations on First Philosophy, who first proposed the idea that the mind and the body are separate substances – what’s called Cartesian dualism or “disembodied intelligence.” Descartes identified the mind with our conscious subjective experience of the world and with the source of our intelligence. He thought that although we didn’t need a body to be intelligent, our mind and body interacted; physical events caused mental events.
Fast-forwarding 300 years, we find a very similar idea: the mind is “a computational system that’s physically implemented by neural activity in the brain.” The computational mind was first suggested by Warren McCulloch and Walter Pitts in 1943. Jerry Fodor and Hilary Putnam extended this idea in the following decades into what’s known today as the computational theory of the mind.
Building upon it, in 1976 Allen Newell and Herbert A. Simon proposed the physical symbol system hypothesis (PSS hypothesis). It states that “a physical symbol system has the necessary and sufficient means for general intelligent action.” With this hypothesis, we close the circle going back to Descartes, who defended, as Emilia Bratu puts it, that “human understanding is all about forming and manipulating symbolic representations.”
Bringing these ideas together we have:
- The body and the mind are separated.
- The mind is realized by computational brain activity.
- Intelligence appears through symbol manipulation.
It was under this framework that AI first appeared, after John McCarthy proposed it as a field of research in its own right in 1956.
The reigning AI paradigm is bodiless
Throughout the last 60 years of AI development, symbolic AI (expert systems) and connectionist AI (neural networks) have indisputably dominated the landscape. First, symbolic AI reigned from the 1950s through the 1980s. Then, with the advent of machine learning and deep learning, neural networks rose to the status they enjoy today.
Geoffrey Hinton transformed the field of artificial intelligence a decade ago with deep learning. Now he’s working on a new imaginary system he envisions as a way to model human perception and intuition in a machine.https://t.co/ZJLny9Mco0
— MIT Technology Review (@techreview) June 14, 2021
Although very different in appearance, both approaches have one important thing in common: They live within the boundaries of Cartesian dualism, the computational theory of mind, and the PSS hypothesis. And hence, both reject the idea that truly intelligent AI needs a body.
Today’s AI, best exemplified by deep learning, has forgotten about the body. The most important advances of the decade rely on software-based AI: systems that generate text at a human level, beat world champions at chess or Go, mimic human creativity by emulating Shakespeare or Bach, or promise to drive our cars in the future. All are systems that live in the virtual world of a bodiless computer.
Most interest and effort are captured by the idea that artificial general intelligence (AGI) can be realized in software-based systems in a computer. However, there are important limitations to this approach.
Disembodied machines can’t acquire ‘know-how’ knowledge
It was philosopher Hubert Dreyfus who first attacked the notions behind the PSS hypothesis. In his 1972 book What Computers Can’t Do, he highlighted a key difference between human intelligence and early, symbolic AI. He argued that a good part of human knowledge is tacit knowledge – know-how experiential knowledge, such as riding a bike or learning a language – which can’t be adequately transmitted, let alone formalized or codified. Expertise is mostly tacit, Dreyfus stated, and therefore “expert” AI systems could never be truly expert. In the words of Michael Polanyi, “we can know more than we can tell.”
With the advent of connectionist AI and the boom of neural networks, Dreyfus’ arguments apparently became obsolete. Machine learning systems could learn without being explicitly told what or how to learn. These systems can perform face recognition, a prime example of tacit knowledge. We can recognize our mother’s face among a thousand faces, but we don’t know how we do it. We can’t transmit the know-how, and yet machine learning systems can recognize faces even better than we do.
However, Ragnar Fjelland, in a defense of Dreyfus’ arguments, stated that not even connectionist AI systems can get actual tacit knowledge. He explains that experiencing the real world is necessary to gain this type of knowledge. AI systems, in contrast, only experience – at best – the oversimplified models of reality we feed them. Machines can achieve expertise within the boundaries of a virtual world, but not more than that. In the words of Fjelland: “As long as computers do not grow up, belong to a culture, and act in the world, they will never acquire human-like intelligence.”
The importance of experiencing the world
We develop our understanding of the world by interacting with our surroundings. An apple isn’t just the green or red light it reflects, its smooth feel, and its sweet taste. We know that an apple costs money. We know it rots eventually if we don’t eat it. We know it hurts if it falls from a tree and hits us on the head, even if that has never happened to us.
We understand what an apple is in all its forms because we can link information to meaning. An AI system can classify apples, but it can’t understand why someone would rather eat chocolate. Because AI systems don’t live in the world, they can’t interact with it, and therefore they can’t understand it. Giulio Sandini, professor of bioengineering at the University of Genoa, argues that “to develop something like human intelligence in a machine, the machine has to be able to acquire its own experiences.”
Dreyfus argued that our intelligence derives from the complex relationship between the sensory information we actively perceive and our actions on the world. We don’t passively absorb the world, as AI systems do, we “enact our perceptual experience.” Alva Noë says it best in his book Action in Perception, “perception is not a process in the brain, but a kind of skillful activity of the body as a whole. […] The world is not given to consciousness all at once but is gained gradually by active inquiry and exploration.”
Can #AI become self-conscious without a body? Some would say it can, others would disapprove. Philosopher Hubert Dreyfus argued that, unless embodied, artificial agents cannot deal with the real world: https://t.co/dCRbthB1Jy #artificialintelligence #machinelearning #technology pic.twitter.com/fWFKDzrzjW
— QUALITANCE (@QUALITANCE) October 6, 2020
- We are intelligent because we experience the world.
- Cognition and perception are active processes tied to action.
- We perceive the world through our bodies.
Experiencing the world gives us access to tacit knowledge which leads to expertise, a hallmark of human intelligence. It seems reasonable to assume that machines will need to experience the world to be truly intelligent. The obvious question is: How could we create machines that fulfill these requirements?
The promise of developmental robotics
This recent field of research combines ideas from robotics, artificial intelligence, developmental psychology, and neuroscience. Scholarpedia defines its primary goal as modeling “the development of increasingly complex cognitive processes in natural and artificial systems and to understand how such processes emerge through physical and social interaction.”
Developmental robotics merges robotics and AI but differs from both in two aspects. First, it emphasizes the role of the body and the environment as causal elements giving rise to cognition. Second, artificial cognitive systems aren’t programmed. They emerge from the initiation and maintenance of a developmental process in which they interact with physical environments (inanimate objects) and social environments (people or other robots).
Researchers use robots to test their cognitive models because they can interact with the world. Within this paradigm, developmental roboticists could eventually create a robot that grows up in the world like a human child.
As early as 1950, Alan Turing argued that building a child’s brain and educating it could be a better approach to creating artificial intelligence than building an adult’s brain. It makes sense to follow this path towards AGI because development is the only process we know of by which organisms acquire intelligence. It might not be strictly necessary (as connectionists and symbolists would argue), but it’s reasonable to assume that it’s “mechanistically crucial” to emulating human-like intelligence in machines.
By giving AI cognitive systems a body that can develop and interact with the physical and social worlds we are merging the efforts of traditional AI with the only known instances of true intelligence. It’s at the intersection of AI, robotics, and cognitive sciences that we’ll find the path towards AGI.
Descartes popularized a school of thought in the philosophy of mind that has extended its influence to this day. Disembodied approaches to AI have found significant success in the last 60 years, but they’re still far from achieving human-like intelligence. Developmental robotics could be the answer to the remaining questions.
No one can claim to have found the missing link between today’s AI and AGI, but merging AI, robotics, and the cognitive sciences could bring us closer to the only instance we have of true intelligence: Us.
Reprinted with permission from the author.