By Prof. Gabriel A. Silva | 4 October 2021
The ‘Thinkers and Innovators’ series explores the science and philosophy of the brain and mind with some of the world’s foremost forward thinking experts. It also explores technologies used for studying and interfacing with the brain, as well as technologies motivated by the brain, such as machine learning and artificial intelligence. (Disclaimer: Professor Sejnowski is a colleague of the author’s at the University of California San Diego.)
Terrence J. Sejnowski is the Francis Crick Professor at the Salk Institute for Biological Studies, where he is the Director of the Crick-Jacobs Center for Theoretical and Computational Biology, and Professor of Biological Sciences (Division of Neurobiology) at the University of California San Diego. He is also Co-Director of the Institute for Neural Computation at UCSD. He is a pioneer in the areas of computational neuroscience and artificial neural networks. Among a number of major awards and honors, Professor Sejnowski is a member of all three United States National Academies (the National Academy of Engineering, the National Academy of Sciences, and the Institute of Medicine). He was also a member of the Advisory Committee to the Director of the National Institutes of Health (NIH) for the Brain Research through Application of Innovative Neurotechnologies (BRAIN) Initiative, announced by President Obama on April 2nd 2013.
Let’s start with the BRAIN Initiative. The aim of that effort was to revolutionize how scientists measure, study, and interface with the brain, with most of the focus to date on developing ground-breaking neurotechnologies that make possible experiments beyond anything that came before. Do you think it has served its purpose so far?
That’s a great place to start, because it was a miracle that the BRAIN Initiative happened at all. The 1990s was declared the Decade of the Brain, but it failed to make much progress because efforts were focused on neurological and mental disorders, which were easy to explain to the public. NIH, however, was already putting $5 billion a year into brain disorders. So to be truly disruptive, the decision was made that instead of focusing on disease, the BRAIN Initiative would focus on techniques, tools, and methods that could accelerate research into those diseases.
I was on the advisory committee to NIH, which recommended that for the first five years the resources should go to teams of neuroscientists and others, such as engineers, physicists, and mathematicians, working together on experiments and data analysis. NIH changed its internal grant review structure to accommodate this. And it was spectacularly successful in bringing these different researchers together, because there was now an incentive for the engineer to help the biologist, and vice versa. Over the last six or seven years, they have formed really strong, collaborative working relationships. A lot of students have been trained, and a lot of really good research was done. The decisions were made based on our report, BRAIN 2025, and it raised neuroscience to a new level.
But where are we now? You know, we thought we were being ambitious. Well, they accomplished in five years what we thought would take ten. For example, one goal was to record from a million neurons at the same time. We passed that milestone this year. A million neurons! We weren’t sure it would ever happen, but here we are.
So given the pace of progress, where would you want to see things go over the next five years or so?
With these new tools and techniques, it’s already become clear that our conceptual framework for brain function, which was built by recording from one neuron at a time, is flawed. Experiments have revealed much more complex patterns of activity, and I’ll tell you why. It’s very difficult to understand the dynamics of the brain by recording from one neuron at a time; you have no idea what’s going on with the other neurons. Here’s a good example: we now know that patterns of brain activity that people have been talking about for decades, recorded at the whole-brain scale using methods like the electroencephalogram (EEG) or from single neurons, were assumed to be synchronous. But that was wrong. In fact, they’re actually traveling waves. And that puts things into a completely different conceptual framework, because traveling waves spread information out over space and time. It’s a space-time code, and we just don’t have a good conceptual framework yet that explains it. This could not have been discovered by recording from single neurons.
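The distinction can be made concrete with a toy simulation (a hypothetical NumPy sketch with made-up numbers, not data from any study mentioned here): a single electrode sees exactly the same oscillation whether the population is synchronous or a traveling wave, and only a multi-site recording reveals the space-time structure.

```python
import numpy as np

# Toy model: 10 recording sites along a 2 cm cortical strip, 10 Hz oscillation.
# All numbers are illustrative only.
n_sites, fs, duration = 10, 1000, 1.0          # sites, sampling rate (Hz), seconds
t = np.arange(0, duration, 1 / fs)             # time axis
positions = np.linspace(0, 0.02, n_sites)      # site positions in meters
f = 10.0                                       # oscillation frequency (Hz)

# Synchronous code: every site oscillates with identical phase.
sync = np.sin(2 * np.pi * f * t)[None, :].repeat(n_sites, axis=0)

# Traveling wave: phase lags linearly with position (assumed speed ~0.2 m/s).
speed = 0.2
wave = np.sin(2 * np.pi * f * (t[None, :] - positions[:, None] / speed))

# A single electrode cannot tell these apart: site 0 looks identical in both.
assert np.allclose(sync[0], wave[0])

# Across the array the difference is obvious from peak-time offsets.
peak_sync = t[sync.argmax(axis=1)]   # all sites peak at the same moment
peak_wave = t[wave.argmax(axis=1)]   # peaks shift systematically with position
print(np.ptp(peak_sync), np.ptp(peak_wave))  # spread of peak times across sites (s)
```

In the synchronous case the spread of peak times is zero, while for the traveling wave the peaks sweep across the array, which is the space-time structure a single-neuron recording cannot see.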
Is this all so new that we just don’t have a good mathematical framework to understand the new data coming from the brain in an appropriate context?
Exactly. When this happens in science, it is an exciting time to be around. This happened in physics at the beginning of the 20th century. Classical mechanics was supposed to have everything already solved, but new experiments did not agree with classical physics. Something really strange was going on; it just didn’t make any sense. So a new conceptual framework had to be created to replace the classical view of the world. It happened first with relativity and then again with quantum mechanics.
So how do you see physics, math and engineering interacting with computational and theoretical neuroscience moving forward? And what about its relationship with machine learning and AI?
I’ve written a whole book about this, The Deep Learning Revolution. AI has been transformed over the last decade by Deep Learning. There are new companies building special-purpose machine learning hardware that complements the massive data being collected everywhere. And where did that revolution come from? It was inspired by the massively parallel architecture of brains. For the first time, AI and neuroscience are speaking the same language. There’s a conceptual and mathematical structure emerging that could serve as a conceptual umbrella for both of these groups.
This revolution happened overnight, but it took decades to mature. I was present when the Neural Information Processing Systems (NIPS) meeting first got started in the 1980s. We brought together mathematicians, neuroscientists, and cognitive scientists, as well as researchers in computer vision and speech recognition. It was amazing to have a dozen tribes getting together. These were not the establishment people in these fields; they were the outliers. And who were they? They were people trying to solve really difficult problems for which the tools and techniques available in their own fields were not adequate. Speech recognition, for example, was hard because it is a very high-dimensional problem: traditional algorithms failed, and a huge amount of data was needed to make progress. The hope was that neural networks might be able to help.
With lots of data, you have to have analysis tools, like machine learning, to make progress. Neural networks with many parameters are complex functions that can represent those complexities. And now we’re living in an era where, for the first time, we have a mathematical framework that is getting better every day at handling the complexity of the world. Nature was in that business a long time ago and evolved brains to help us survive.
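The idea of a neural network as a complex parameterized function fit to data can be illustrated with a minimal sketch (hypothetical NumPy code, not from any project discussed here): a tiny two-layer network learns the XOR relationship, which no linear model can represent.

```python
import numpy as np

# Minimal two-layer network trained by gradient descent on XOR.
# Architecture and hyperparameters are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])          # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)      # hidden-layer parameters
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)      # output-layer parameters

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):                              # plain full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                        # hidden activations
    p = sigmoid(h @ W2 + b2)                        # predictions in (0, 1)
    # Backpropagate the cross-entropy gradient through both layers.
    dp = p - y
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h**2)                   # tanh derivative
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p.ravel(), 2))   # predictions approach the XOR targets [0, 1, 1, 0]
```

A single linear layer cannot separate XOR at all; the hidden layer is what lets the parameterized function capture the nonlinear relationship, which is the sense in which many-parameter networks represent complexity.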
Back when I was starting out, I was befuddled because symbol processing was the only game in town at AI meetings. They thought language was just a way of manipulating symbols. But it seemed to me that couldn’t be right, because for me language is about meaning: an utterance has meaning, and sentences have higher meaning. It’s the relationships between symbols that neural networks extract through learning. Nature solved these problems in vision and speech millions of years ago. In fact, the only existence proof that they can be solved is that we walk around and can understand each other. So why not look inside the brain to figure out how? There’s got to be something we can learn from the brain. This seemed obvious to me, but I was an outlier.
I finally understood what was going on when Geoff Hinton got a job at Carnegie Mellon University. Allen Newell was there; he had been at the original 1956 meeting that is traditionally thought of as the birth of artificial intelligence, and he had written a program that could prove theorems. Wow. If AI could prove theorems, it could solve anything, right? So I asked Newell: did it ever occur to you to look at how the brain works? Because computer scientists have, until recently, ignored in any serious way how the brain actually works. Here’s what he said: ‘No, no, that’s not true. We were very interested in what we could learn from the brain. But not very much was then known about brains, so we couldn’t do much about it!’ The goal of AI back then was to write a program that had the same functions as brains, but they vastly underestimated the amount of computation that was needed.
One last question. Neural computation and simulations will continue to play a big role in neuroscience discovery, in helping scientists understand how the algorithms of the brain come together. But what about theory? What role does theory have in making sense of the brain?
Here’s my guess for how it’s going to play out. The mathematicians have now jumped into the game, and they’re figuring out why Deep Learning works, both in the brain and in machine learning. They’re creating new branches of mathematics: theories of high-dimensional spaces. AI today has reached a trillion dimensions. The geometry of high-dimensional spaces, and working out the complexity of that geometry, is at the foundation of what’s going on in the brain. I think there’s going to be a common theory for both AI and the brain. It’s a lot easier to make a theory and test it for a Deep Learning network than for the brain, because deep learning networks are not a black box. They’re completely open: you have access to every unit, every activity pattern, every connection. You have complete access. The brain, in that sense, is a black box. So if we can figure it out for the open box, and I think we can, then that could inspire new theories for the brain.
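The “open box” point can be sketched with a toy network (illustrative, randomly weighted, not any real model): every connection weight and every unit’s activity is directly readable during a forward pass, in contrast to electrophysiology, which samples only a tiny fraction of the brain.

```python
import numpy as np

# Toy feed-forward network in which every parameter and activation is observable.
# Layer sizes and weights are arbitrary illustrative choices.
rng = np.random.default_rng(42)
layer_sizes = [4, 16, 16, 3]                 # input, two hidden layers, output
weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

x = rng.normal(size=4)                       # one input "stimulus"
activations = [x]
for W in weights:                            # forward pass, recording every layer
    activations.append(np.tanh(activations[-1] @ W))

# Complete access: every connection weight and every unit's activity.
n_connections = sum(W.size for W in weights)
n_units = sum(layer_sizes)
print(f"{n_connections} connections, {n_units} units, all observable")
for i, a in enumerate(activations):
    print(f"layer {i}: activity = {np.round(a, 2)}")
```

Nothing here is hidden: any theory about how the network computes can be checked against the full weight matrices and the full activity pattern, which is exactly what cannot be done for the brain.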
Reprinted with permission from the author.
Gabriel A. Silva is a theoretical and computational neuroscientist and bioengineer, Professor of Bioengineering at the Jacobs School of Engineering and Professor of Neurosciences in the School of Medicine at the University of California San Diego (UCSD). He is also the Founding Director of the Center for Engineered Natural Intelligence (CENI) at UCSD, and is a Jacobs Faculty Endowed Scholar in Engineering. He holds additional appointments in the Department of NanoEngineering, the BioCircuits Institute, the Neurosciences Graduate Program, Computational Neurobiology Program, and Institute for Neural Computation.