By Alberto Romero | 25 April 2021 | Medium

Machine learning is arguably the most important area of AI. Within machine learning, deep learning (DL) has seen the most success and popularity. It has single-handedly multiplied the interest in the field in recent years.
Governments and big companies have adapted to this new era of DL. It’s present in healthcare, transport, education, commerce, finance, and every other industry. DL-based systems can recognize faces and voices, translate and generate text, create Shakespearean pieces, imitate Bach, beat champions at chess or Dota, and drive cars.
DL is the present of AI. But, is it the future?
Even if current paradigms are proving highly useful today, they’re nowhere close to imbuing machines with human-like intelligence. Bigger neural nets trained on ever more data aren’t going to get us there.
Let’s take object recognition, for instance. Researchers have used ImageNet as a benchmark database since 2010. In 2012, Alex Krizhevsky and his team won the ImageNet Challenge with a DL model based on convolutional neural nets, beating their (non-DL) rivals by more than 10 percentage points and achieving 63.3% top-1 accuracy. Today, the best DL models exceed 90% top-1 accuracy on the ImageNet benchmark. That’s better than a human.
However, those same models suffer a 40–45 percentage-point drop in accuracy when classifying real-world images, such as those found in ObjectNet, a bias-controlled object database. These models can classify ImageNet’s clean images almost perfectly but can’t extrapolate to real-world scenarios.
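As a rough illustration of what the metric means, here is a minimal sketch of how top-1 accuracy is typically computed, using a generic pretrained torchvision ResNet-50 rather than any of the specific models mentioned above; the image paths and labels are placeholders:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a generic classifier pretrained on ImageNet (illustrative choice).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top1_prediction(image_path):
    """Return the predicted class index for a single image."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return logits.argmax(dim=1).item()

def top1_accuracy(samples):
    """samples: list of (image_path, true_class_index) pairs."""
    correct = sum(top1_prediction(path) == label for path, label in samples)
    return correct / len(samples)
```

Top-1 accuracy simply counts how often the model’s single highest-scoring guess matches the ground-truth label; the gap between the ImageNet and ObjectNet numbers comes from running this same procedure on two different test sets.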
Other examples of DL’s strengths and shortcomings are DeepMind’s milestone systems, AlphaZero and AlphaStar, capable of beating world-class champions at chess and Go, and at StarCraft II, respectively. These feats seem incredible, but appearances can be misleading. AlphaZero can win at chess and Go, but it can’t play both at once. “Retraining a model’s connections and responses so that it can win at chess resets any previous experience it had of Go,” says Douglas Heaven in an article for Nature. “From the perspective of a human,” says Chelsea Finn, a researcher at Stanford University, “this is kind of ridiculous.”
The above examples illustrate DL’s difficulties with transfer learning: the ability to retain the knowledge learned while solving one problem and apply it to another, related problem. We humans are far better at this. Melanie Mitchell, computer science professor at Portland State University, says that “machines often are not able to deal with input that is different from the kind of input they have been trained on.”
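In practice, the closest thing to a standard workaround is fine-tuning: reuse a network pretrained on one task and retrain only a small part of it for a related one. Here is a minimal PyTorch sketch, assuming a stock torchvision backbone and a hypothetical 10-class downstream task:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained features so they aren't overwritten while adapting.
for param in backbone.parameters():
    param.requires_grad = False

# Swap the final layer for the new task (10 classes instead of ImageNet's 1,000).
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head is trained; the rest of the learned knowledge is reused.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch from the new task."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Even this only reuses features across closely related domains; it doesn’t give a model anything like the flexible, cumulative learning Mitchell is pointing to.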
Furthermore, DL systems need lots of data and lots of computing power and are still very dumb. In the words of Yoshua Bengio, one of the ‘Godfathers of AI,’ “[machines] need much more data to learn a task than human examples of intelligence, and they still make stupid mistakes.”
Yann LeCun, another DL pioneer, thinks these systems will still be important in the future, but they will need some changes. He argues that supervised learning – the most common training method, which relies on labeled data – will be replaced with self-supervised learning.
“[Self-supervised learning] is the idea of learning to represent the world before learning a task. This is what babies and animals do,” he says. “Once we have good representations of the world, learning a task requires few trials and few samples.”
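To make the contrast concrete, here is a toy sketch of a self-supervised pretext task: the model learns representations from unlabeled data by reconstructing randomly masked inputs, and only afterwards would a small classifier be trained on top with few labels. The architecture and numbers are arbitrary placeholders, not anything LeCun has proposed:

```python
import torch
import torch.nn as nn

# Toy self-supervised setup: no labels anywhere in pretraining.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

def pretrain_step(x):
    """x: batch of unlabeled input vectors, shape (batch_size, 784)."""
    mask = (torch.rand_like(x) > 0.25).float()  # hide roughly 25% of each input
    z = encoder(x * mask)                       # representation of the corrupted input
    x_hat = decoder(z)                          # try to reconstruct the original
    loss = ((x_hat - x) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# After pretraining, `encoder` provides reusable representations; a downstream
# classifier on top of them can then be trained with relatively few labeled samples.
```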
It’s generally accepted that DL will still be around for some time in one form or another.
That’s the main conclusion from Martin Ford’s book Architects of Intelligence. Among the challenges of today’s AI are “its application to narrow domains, its overreliance on data, and its limited understanding of the meaning of language.”
In the words of Oren Etzioni, CEO of Allen Institute for Artificial Intelligence: “I think the reality is that deep learning and neural networks are particularly nice tools in our toolbox, but it’s a tool that still leaves us with a number of problems like reasoning, background knowledge, common sense, and many others largely unsolved.”
Other, more critical voices argue that neither DL nor any other current AI paradigm is the way forward. They say we will most likely never build machines with artificial general intelligence (AGI). Drawing on the arguments of philosopher Hubert Dreyfus, Ragnar Fjelland, professor at the University of Bergen, writes that “as long as computers do not grow up, belong to a culture, and act in the world, they will never acquire human-like intelligence.”
In The Book of Why, Judea Pearl and Dana Mackenzie argue that to create human-like intelligent machines, we’d need to imbue them with the ability to answer causal questions of the form “What happens if I do…?” This is what they call the mini-Turing test.
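A toy example helps show what answering “What happens if I do…?” involves: in a structural causal model, the question is answered by intervening on a variable (overriding the mechanism that normally sets it), which is not the same as filtering observed data. The sprinkler/rain model and its probabilities below are invented for illustration; they are not taken from the book:

```python
import random

# Toy structural causal model: rain influences whether the sprinkler runs,
# and both rain and the sprinkler influence whether the grass is wet.
def sample(do_sprinkler=None):
    rain = random.random() < 0.5
    if do_sprinkler is None:
        sprinkler = random.random() < (0.05 if rain else 0.6)  # natural mechanism
    else:
        sprinkler = do_sprinkler                               # intervention: do(sprinkler)
    p_wet = 0.99 if (rain and sprinkler) else 0.9 if rain else 0.6 if sprinkler else 0.0
    wet = random.random() < p_wet
    return rain, sprinkler, wet

def prob_wet_do_sprinkler(n=100_000):
    """P(wet | do(sprinkler = on)): force the sprinkler on, leave rain alone."""
    return sum(sample(do_sprinkler=True)[2] for _ in range(n)) / n

def prob_wet_given_sprinkler_on(n=100_000):
    """P(wet | sprinkler = on): only keep runs where the sprinkler happened to be on."""
    draws = [sample() for _ in range(n)]
    wets = [wet for rain, sprinkler, wet in draws if sprinkler]
    return sum(wets) / len(wets)

print(prob_wet_do_sprinkler())      # intervening
print(prob_wet_given_sprinkler_on())  # merely observing
```

The two numbers differ: observing the sprinkler on makes rain less likely (in this toy model it mostly runs on dry days), while intervening cuts that link and leaves the chance of rain unchanged. A system that only fits correlations in data answers the second question; the mini-Turing test asks for the first.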
They explain that computers haven’t advanced much in this area in the last 30 years. If computers had a model of reality, they’d be able to pass the mini-Turing test. Fjelland, however, criticizes this notion because no one can have a complete model of reality; we can only model “simplified aspects of reality.”
“The real problem,” he writes, “is that computers are not in the world, because they are not embodied.”
For this, machines would need to be able to interact with the world and acquire contextual knowledge the way we do. The conclusion is clear for Fjelland: “We will not be able to realize AGI because computers are not in the world.”
Those who remain optimistic argue that some twists and tweaks will keep DL as the primary force paving the road toward an intelligent future for AI. Geoffrey Hinton goes further, suggesting that we may have to “throw it all away and start again.” To advance to the next frontiers in the search for machine intelligence, we’ll need to look to the human brain, he says.
However, the most pessimistic argument comes from Fjelland, who concludes by saying that “the belief that AGI can be realized is harmful. If the power of technology is overestimated and human skills are underestimated, the result will in many cases be that we replace something that works well with something that is inferior.”
Whatever the future holds for AI, we’ll need to keep governments, institutions, venture capitalists, and other funders convinced of its promise, or we may see yet another AI winter.
Reprinted with permission from the author.
Alberto Romero is a Spanish engineer, neuroscientist, and writer. Follow him on LinkedIn, Medium, and Twitter.