The Future of Intelligence

    Excerpt from On Intelligence, by Jeff Hawkins with Sandra Blakeslee (Times Books, 2004). Reprinted with permission from the authors.

    Chapter 8: The Future of Intelligence

    It’s hard to predict the ultimate uses of a new technology. As we’ve seen throughout this book, brains make predictions by analogy to the past. So our natural inclination is to imagine that a new technology will be used to do the same kinds of things as a previous technology. We imagine using a new tool to do something familiar, only faster, more efficiently, or more cheaply.

    Examples are abundant. People called the railroad the “iron horse” and the automobile the “horseless carriage.” For decades the telephone was viewed in the context of the telegraph, something that should be used only to communicate important news or emergencies; it wasn’t until the 1920s that people started using it casually. Photography was at first used as a new form of portraiture. And motion pictures were conceptualized as a variation on stage plays, which is why movie theaters had retracting curtains over the screens for much of the twentieth century.

    Yet the ultimate uses of a new technology are often unexpected and more far-reaching than our imaginations can at first grasp. The telephone has evolved into a wireless voice and data communications network permitting any two people on the planet to communicate with each other, no matter where they are, via voice, text, and images. The transistor was invented by Bell Labs in 1947. It was instantly clear to people that the device was a breakthrough, but the initial applications were just improvements on old applications: transistors replaced vacuum tubes. This led to smaller and more reliable radios and computers, which was important and exciting in its day, but the main differences were the size and reliability of the machines. The transistor’s most revolutionary applications weren’t discovered until later. A period of gradual innovation was necessary before anyone could conceive of the integrated circuit, the microprocessor, the digital signal processor, or the memory chip. The microprocessor, likewise, was first developed, in 1970, with desktop calculators in mind. Again, the first applications were just replacements of existing technologies. The electronic calculator was a replacement for the mechanical desktop calculator. Microprocessors were also clear candidates to replace the solenoids that were then used in certain kinds of industrial control, such as switching traffic lights. However, it was years before the true power of the microprocessor began to be manifest. No one at the time could foresee the modern personal computer, the cell phone, the Internet, the Global Positioning System, or any other piece of today’s bread-and-butter information technology.

    By the same token, we would be foolish to think we can predict the revolutionary applications of brainlike memory systems. I fully expect these intelligent machines to improve life in all sorts of ways. We can be sure of it. But predicting the future of technology more than a few years out is impossible. To appreciate this you need only read some of the absurd prognostications futurists have confidently made over the years. In the 1950s, it was predicted that by the year 2000 we’d all have atomic reactors in our basements and take our vacations on the moon. But as long as we keep these cautionary tales in mind, there’s a lot to be gained by speculating about what intelligent machines will be like. At a minimum, there are certain broad and useful conclusions we can draw about the future.

    The questions are intriguing ones. Can we build intelligent machines, and, if so, what will they look like? Will they be closer to the humanlike robots seen in popular fiction, the black or beige box of a personal computer, or something else? How will they be used? Is this a dangerous technology that can harm us or threaten our personal liberties? What are the obvious applications for intelligent machines, and is there any way we can know what the fantastic applications will be? What will the ultimate impact of intelligent machines be on our lives?

    Can We Build Intelligent Machines?

    Yes, we can build intelligent machines, but they may not be what you expect. Although it may seem like the obvious thing to do, I don’t believe we will build intelligent machines that act like humans, or even interact with us in humanlike ways.

    One popular notion of intelligent machines comes to us from movies and books—they are the lovable, evil, or occasionally bumbling humanoid robots that converse with us about feelings, ideas, and events, and play a role in endless science-fiction plots. A century of science fiction has trained people to view robots and androids as an inevitable and desirable part of our future. Generations have grown up with images of Robby the Robot from Forbidden Planet, R2-D2 and C-3PO from Star Wars, and Lieutenant Commander Data from Star Trek. Even HAL in the movie 2001: A Space Odyssey, although not possessing a body, was very humanlike, designed to be as much a companion as a programmed copilot for the humans on their long space journey. Limited-application robots—things like smart cars, autonomous minisubmarines to explore the deep ocean, and self-guided vacuum cleaners or lawn mowers—are feasible and may well grow more common someday. But androids and robots like Commander Data and C-3PO are going to remain fictional for a very long time. There are a couple of reasons for this.

    First, the human mind is created not only by the neocortex but also by the emotional systems of the old brain and by the complexity of the human body. To be human you need all of your biological machinery, not just a cortex. To converse like a human on all matters (to pass the Turing Test) would require an intelligent machine to have most of the experiences and emotions of a real human, and to live a humanlike life. Intelligent machines will have the equivalent of a cortex and a set of senses, but the rest is optional. It might be entertaining to watch an intelligent machine shuffle around in a humanlike body, but it will not have a mind that is remotely humanlike unless we imbue it with humanlike emotional systems and humanlike experiences. That would be extremely difficult and, it seems to me, quite pointless.

    Second, given the cost and effort that would be necessary to build and maintain humanoid robots, it is difficult to see how they could be practical. A robot butler would be more expensive and less helpful than a human assistant. While the robot might be “intelligent,” it would not have the kind of rapport and easy understanding a human assistant would have by virtue of being a fellow human being.

    Both the steam engine and the digital computer evoked robotic visions, which never came to fruition. Similarly, when we think of building intelligent machines, many people find it natural to imagine humanlike robots once again, but it is unlikely to happen. Robots are a concept born of the industrial revolution and refined by fiction. We should not look to them for inspiration in developing genuinely intelligent machines.

    So what will intelligent machines look like if not walking talking robots? Evolution discovered that if it attached a hierarchical memory system to our senses, the memory would model the world and predict the future. Borrowing from nature, we should build intelligent machines along the same lines. Here, then, is the recipe for building intelligent machines. Start with a set of senses to extract patterns from the world. Our intelligent machine may have a set of senses that differ from a human’s, and may even “exist” in a world unlike our own (more on this later). So don’t assume that it has to have a set of eyeballs and a pair of ears. Next, attach to these senses a hierarchical memory system that works on the same principles as the cortex. We will then have to train the memory system much as we teach children. Over repetitive training sessions, our intelligent machine will build a model of its world as seen through its senses. There will be no need or opportunity for anyone to program in the rules of the world, databases, facts, or any of the high-level concepts that are the bane of artificial intelligence. The intelligent machine must learn via observation of its world, including input from an instructor when necessary. Once our intelligent machine has created a model of its world, it can then see analogies to past experiences, make predictions of future events, propose solutions to new problems, and make this knowledge available to us.
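
    To make this recipe concrete, here is a minimal toy sketch of the observe-and-predict loop it describes. The class name and the simple one-step sequence model are illustrative assumptions only; a genuinely cortical-style memory would be hierarchical and vastly richer.

```python
# A toy illustration of the recipe: a sense feeds patterns into a
# memory that learns purely by observation and then predicts by
# analogy to the past. The first-order transition model here is an
# illustrative stand-in for a real hierarchical memory.
from collections import Counter, defaultdict

class SequenceMemory:
    """Learns which pattern tends to follow which, from observation alone."""
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.last = None

    def observe(self, pattern):
        # No rules, databases, or facts are programmed in; the model
        # is built entirely from the stream of sensed patterns.
        if self.last is not None:
            self.transitions[self.last][pattern] += 1
        self.last = pattern

    def predict(self):
        # Predict the most frequent successor of the current pattern.
        if self.last is None or not self.transitions[self.last]:
            return None
        return self.transitions[self.last].most_common(1)[0][0]

# Repetitive "training sessions": the machine watches its world.
memory = SequenceMemory()
for pattern in ["dawn", "traffic", "dusk", "quiet",
                "dawn", "traffic", "dusk"]:
    memory.observe(pattern)

print(memory.predict())  # -> "quiet", inferred from past experience
```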

    Physically, our intelligent machine might be built into planes or cars, or sit stoically on a rack in a computer room. Unlike humans, whose brains must accompany their bodies, the memory system of an intelligent machine might be located remotely from its sensors (and “body,” if it had one). For example, an intelligent security system might have sensors located throughout a factory or a town, but the hierarchical memory system attached to those sensors could be locked in a basement of one building. Therefore, the physical embodiment of an intelligent machine could take many forms.

    There is no reason that an intelligent machine should look, act, or sense like a human. What makes it intelligent is that it can understand and interact with its world via a hierarchical memory model and can think about its world in a way analogous to how you and I think about our world. As we will see, its thoughts and actions might be completely different from anything a human does, yet it will still be intelligent. Intelligence is measured by the predictive ability of a hierarchical memory, not by humanlike behavior.

    * * * * *

    Let’s turn our attention to the largest technical challenge we will face when building intelligent machines: creating the memory. To build intelligent machines, we will need to construct large memory systems that are hierarchically organized and that work like the cortex. We will confront challenges with capacity and connectivity.

    Capacity is the first issue. Let’s say the cortex has 32 trillion synapses. If we represented each synapse using only two bits (giving us four possible values per synapse) and each byte has eight bits (so one byte could represent four synapses), then we would need roughly 8 trillion bytes of memory. A hard drive on a personal computer today has 100 billion bytes, so we would need about eighty of today’s hard drives to have the same amount of memory as a human cortex. (Don’t worry about the exact numbers because they are all rough guesses.) The point is, this amount of memory is definitely buildable in the lab. We aren’t off by a factor of a thousand, but it is also not the kind of machine you could put in your pocket or build into your toaster. What is important is that the amount of memory required is not out of the question, whereas only ten years ago it would have been. Helping us is the fact that we don’t have to re-create an entire human cortex. Much less may suffice for many applications.
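
    For readers who want to check the arithmetic, the estimate runs as follows; every figure is one of the rough guesses from the paragraph above.

```python
# Back-of-the-envelope capacity estimate, using the text's own numbers.
synapses = 32e12              # assumed synapse count for the cortex
bits_per_synapse = 2          # four possible values per synapse
bytes_needed = synapses * bits_per_synapse / 8
print(bytes_needed)           # 8e12: roughly 8 trillion bytes

drive_bytes = 100e9           # a circa-2004 hard drive, 100 billion bytes
print(bytes_needed / drive_bytes)  # 80.0: about eighty such drives
```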

    Our intelligent machines will need lots of memory. We will probably start building them using hard drives or optical disks, but eventually we will want to build them out of silicon as well. Silicon chips are small, low power, and rugged. And it is only a matter of time before silicon memory chips can be made with enough capacity to build intelligent machines. In fact, there is an advantage intelligent memory has over conventional computer memory. The economics of the semiconductor industry is based on the percentage of chips that have errors. For many chips even a single error will make the chip useless. The percentage of good chips is called the yield. It determines whether a particular chip design can be manufactured and sold at a profit. Because the chance of an error increases as the size of the chip does, most chips today are no bigger than a small postage stamp. The industry has boosted the amount of memory on a single chip not by making the chip larger but, mostly, by making the individual features on the chip smaller.

    But intelligent memory chips will be inherently tolerant of faults. Remember, no single component of your brain holds any indispensable item of data. Your brain loses thousands of neurons each day, yet your mental capacity decays at only a slow pace throughout your adult life. Intelligent memory chips will work on the same principles as cortex, so even if a percentage of the memory elements come out defective, the chip will still be useful and commercially viable. Most likely, the inherent tolerance to errors of brainlike memory will allow designers to build chips that are significantly larger and denser than today’s computer memory chips. The result is that we may be able to put a brain in silicon sooner than current trends might indicate.
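
    To see why that tolerance matters economically, consider the classic Poisson yield model, in which the fraction of defect-free chips falls off exponentially with chip area. The sketch below uses an invented defect density purely for illustration; the book gives no such figures.

```python
import math

# Poisson yield model: with defect density d (defects per cm^2) and
# chip area a (cm^2), the expected defect count is d*a, and the chance
# of exactly k defects is Poisson-distributed. The defect density used
# below is an invented illustrative number.
def usable_fraction(defect_density, area_cm2, tolerated_defects=0):
    """Fraction of chips usable if up to `tolerated_defects` defects are acceptable."""
    lam = defect_density * area_cm2
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(tolerated_defects + 1))

d = 0.5                                   # defects per cm^2 (illustrative)
for area in (1, 4, 16):                   # conventional chips must be perfect
    print(area, round(usable_fraction(d, area), 3))
# yield collapses with area: 0.607, 0.135, 0.0

# A brainlike chip that still works with, say, 10 defects keeps a
# healthy usable fraction even at 16 cm^2:
print(round(usable_fraction(d, 16, tolerated_defects=10), 3))  # 0.816
```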

    The second problem we have to overcome is connectivity. Real brains have large amounts of subcortical white matter. As we noted earlier, the white matter is made up of the millions of axons streaming this way and that just beneath the thin cortical sheet, connecting the different regions of the cortical hierarchy with each other. An individual cell in the cortex may connect to five or ten thousand other cells. This kind of massively parallel wiring is difficult or impossible to implement using traditional silicon manufacturing techniques. Silicon chips are made by depositing a few layers of metal, each separated by a layer of insulation. (This process has nothing to do with the layers in the cortex.) The layers of metal contain the “wires” of the chip, and because wires can’t cross within a layer, the total number of wired connections is limited. This is not going to work for brainlike memory systems, where millions of connections are necessary. Silicon chips and white matter are not very compatible.

    A lot of engineering and experimentation will be necessary to solve this problem, but we know the basics of how it will be solved. Electrical wires send signals much more quickly than the axons of neurons. A single wire on a chip can be shared, and therefore used for many different connections, whereas in the brain each axon belongs to just one neuron.

    A real-world example is the telephone system. If we ran a line from every telephone to every other telephone, the surface of the globe would be buried under a jungle of copper wire. What we do instead is have all the telephones share a relatively small number of high-capacity lines. This method works as long as the capacity of each line is far greater than the capacity required to transmit a single conversation. The telephone system meets this requirement: a single fiber optic cable can carry a million conversations at once.
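
    The arithmetic behind the analogy is stark. The million-conversation fiber capacity is from the text; the telephone count below is an invented figure for illustration.

```python
import math

# All-to-all dedicated wiring needs n*(n-1)/2 lines, growing
# quadratically with the number of telephones n.
n = 1_000_000                    # an illustrative million telephones
dedicated = n * (n - 1) // 2
print(dedicated)                 # 499999500000: half a trillion lines

# Shared trunks change the picture: with every phone busy at once
# (500,000 conversations) and one fiber carrying a million
# conversations, a single shared line would suffice in principle.
calls = n // 2
print(math.ceil(calls / 1_000_000))  # 1
```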

    Real brains have dedicated axons between all cells that talk to each other, but we can build intelligent machines to be more like the telephone system, sharing connections. Believe it or not, some scientists have been thinking about how to solve the brain chip connectivity problem for many years. Even though the operation of the cortex remained a mystery, researchers knew that we would someday unravel the puzzle, and then we would have to face the issue of connectivity. We don’t need to review the different approaches here. Suffice it to say that connectivity might be the biggest technical obstacle we face in building intelligent machines, but we should be able to handle it.
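
    To make the sharing idea concrete, here is a toy sketch of many logical connections multiplexed onto one fast physical channel by tagging each signal with an address. This is a generic illustration of time-multiplexing, not any specific chip design from the research literature.

```python
# Toy time-multiplexing sketch: thousands of logical "axons" share one
# physical channel, each signal tagged with its destination, much as
# many conversations share one telephone trunk. Purely illustrative.
from collections import defaultdict

class SharedChannel:
    """One fast physical wire standing in for many dedicated connections."""
    def __init__(self):
        self.pending = defaultdict(list)

    def send(self, dest_id, signal):
        # The wire interleaves many logical connections; an address tag
        # is all that keeps them separate, like calls on a phone trunk.
        self.pending[dest_id].append(signal)

    def deliver(self, dest_id):
        """Signals addressed to one destination, in arrival order."""
        return self.pending.pop(dest_id, [])

channel = SharedChannel()
for target in range(5000):           # one cell fanning out to 5,000 targets
    channel.send(target, "spike")

print(len(channel.deliver(42)))      # 1: target 42 still gets its signal
```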

    Once the technological challenges are met, there are no fundamental problems that prevent us from building genuinely intelligent systems. Yes, there are lots of issues that will need to be addressed to make these systems small, low cost, and low power, but nothing is standing in our way. It took fifty years to go from room-size computers to ones that fit in your pocket. But because we are starting from an advanced technological position, the same transition for intelligent machines should go much faster.

    Excerpted from On Intelligence by Jeff Hawkins with Sandra Blakeslee. Copyright © Jeff Hawkins and Sandra Blakeslee, 2004. All rights reserved.

    Jeff Hawkins is one of the most successful and highly regarded computer architects and entrepreneurs in Silicon Valley. He founded Palm Computing and Handspring, and created the Redwood Neuroscience Institute to promote research on memory and cognition. The institute, renamed the Redwood Center for Theoretical Neuroscience, is now located at the University of California, Berkeley. In 2005 he co-founded Numenta, a startup company building a technology based on neocortical theory. It is his hope that Numenta will play a catalytic role in the emerging field of machine intelligence. Hawkins earned his B.S. in electrical engineering from Cornell University in 1979. He was elected to the National Academy of Engineering in 2003. Also a member of the scientific board of Cold Spring Harbor Laboratory, he lives in northern California.

    Sandra Blakeslee has been writing about science and medicine for The New York Times for more than thirty years and is the co-author of Phantoms in the Brain by V. S. Ramachandran and of Judith Wallerstein’s bestselling books on psychology and marriage. Her most recent book, Sleights of Mind, explains how magicians are able to hack into the human brain with ease. She lives in Santa Fe, New Mexico.
