How Machine Learning Will Enable Technologies That Anticipate What The Brain Thinks

By Prof. Gabriel A. Silva | 11 May 2021

The intersection of computers, neurotechnologies, and the human brain.

This past week, Elon Musk’s new venture Neuralink made headlines by showing a video of a monkey playing Pong with its mind, via a surgically implanted wireless device that reads brain signals directly and interprets the intended commands. The technologies that enable such communication between a computer and the brain are called brain-machine interfaces (BMIs).

Brain-machine interfaces (or brain-computer interfaces; the terms are used interchangeably) are technologies designed to directly ‘plug’ into the nervous system: the brain, the retinas in the eyes (which are actually part of the brain itself), the spinal cord, or the peripheral nervous system. The Neuralink device and other similar technologies are designed to read and decode neural signals from individual neurons in selected parts of the brain in an attempt to understand the brain’s outputs. Instead of those outputs going to the arm of a monkey or human controlling a joystick to play Pong or some other video game, they go to a computer, which plays the game instead.

How do they achieve this? Specially designed electrodes are surgically implanted into a target region of the brain where the neural signals need to be recorded. Those signals are then decoded and the intent of the brain interpreted by mathematical models and computer algorithms that take advantage of what is known about how the brain works. Eventually, the commands interpreted by the computer are used to execute desired functions or tasks, such as controlling a robotic arm, generating synthesized speech, or playing video games.
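To make the decoding step above concrete, here is a toy sketch of a linear decoder that maps the firing rates of a handful of recorded neurons to an intended two-dimensional cursor velocity. This is not any real device's algorithm; the weights, baseline rates, and neuron count are all invented for illustration.

```python
import numpy as np

# Hypothetical calibration: each column gives one neuron's contribution
# to the (vx, vy) components of cursor velocity. Real decoders (e.g.
# Kalman filters) are far more sophisticated; these numbers are made up.
W = np.array([[0.5, -0.2, 0.1],
              [0.0,  0.4, -0.3]])
baseline = np.array([10.0, 10.0, 10.0])  # assumed resting firing rates (Hz)

def decode(firing_rates):
    """Map measured firing rates (Hz) to an intended cursor velocity."""
    return W @ (np.asarray(firing_rates) - baseline)

# A neuron firing above its baseline pushes the cursor along the
# direction encoded by its weights.
velocity = decode([14.0, 10.0, 12.0])
```

In a real system the weights come from a calibration session in which the patient imagines movements while neural activity is recorded; fitting that mapping is part of what the mathematical models mentioned above do.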

Because surgically implanted BMIs are highly invasive, their use is restricted to restoring clinical function in patients with debilitating neurological disorders, in particular motor disorders such as paralysis following spinal cord injury or stroke, locked-in syndrome, and amyotrophic lateral sclerosis (ALS). The impact these technologies can have on the quality of life of these patients and their families cannot be overstated.

Until relatively recently, surgically implantable BMIs required wired connections between the brain and the computer the wires were plugged into. But this had a number of serious disadvantages and risks: the electrodes can move in unintended ways as mechanical forces are exerted on the wires, and the wired connection carries a significant risk of infection or other types of injury. More recently, though, BMIs implanted in the brain have gone wireless. The entire device is self-contained within the skull and brain, with no external wires protruding. It communicates with external computers using various ‘through the air’ protocols and algorithms, much the way your Bluetooth and WiFi devices work.

Non-invasive BMIs, in contrast, do not require surgically implanted electrodes. They rely on electroencephalography (EEG) and related methods to read and interpret brain waves, using external electrodes integrated into form factors a user can put on and take off as needed, such as a cap. The video game industry and the virtual and augmented reality worlds have a strong interest in non-invasive BMIs, and these market segments are among the main economic drivers of research in this area. Unfortunately, though, the resolution and quality of the brain signals these non-invasive methods measure are generally not sufficient for the demands of clinical applications.
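A common first step in EEG-based systems is estimating how much signal power falls in a given frequency band, such as the 8–12 Hz alpha band. Here is a minimal sketch on a synthetic signal; the sampling rate and waveform are illustrative, not taken from any real headset.

```python
import numpy as np

# Two seconds of a synthetic EEG-like signal: a pure 10 Hz "alpha"
# oscillation sampled at 250 Hz (a typical EEG sampling rate).
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 10.0 * t)

# Power spectrum via the real-valued FFT.
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

# Fraction of total power in the 8-12 Hz alpha band. For this pure
# 10 Hz signal, essentially all power falls inside the band.
band = (freqs >= 8.0) & (freqs <= 12.0)
alpha_fraction = power[band].sum() / power.sum()
```

Real EEG pipelines add filtering, artifact rejection, and windowing, but band-power features like this one are the raw material many non-invasive BMIs classify.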

The earliest work using EEG to measure and attempt to make sense of brain signals is over 100 years old, dating back to the 1920s. And the engineering accomplishments behind the press Neuralink has been receiving lately are grounded in years of pioneering work by a number of research groups from around the world. In 2012, researchers from Brown University in Providence, Rhode Island, along with colleagues at Massachusetts General Hospital and Harvard University in Boston and the Institute of Robotics and Mechatronics in Germany, showed that a wired BMI could successfully be used by human patients with tetraplegia (a severe form of paralysis affecting all four limbs) to control a robotic arm to drink, and to control a computer screen to read email.

This effort is part of the BrainGate project, a collaborative effort between Brown University, Case Western Reserve University, Massachusetts General Hospital, Stanford University, and the Department of Veterans Affairs. In their most recent work, published just a few days ago, the team introduced a wireless version, in humans, of a previous wired prototype. The patients were able to surf the web and use other apps on a commercially available tablet computer.

In some of the earliest work in the field, researchers at Duke University reported in 2014 that they were able to wirelessly record from 1,800 distinct neurons in the brains of freely moving monkeys for nearly five years. And in 2016 the same group showed that monkeys implanted with their wireless BMI could use the system to continuously manipulate and drive a wheelchair.

And now, converging with advances taking place in machine learning, BMIs are on the verge of entirely new capabilities.

The Problem with One-Size-Fits-All BMIs

There is a tremendous amount of engineering that goes into developing BMIs. State-of-the-art micro- and nano-fabrication, mathematical and computer modeling, extensive neurobiological experiments, pre-clinical testing in animal models, and clinical testing in humans all need to take place. Because of the up-front complexity and effort required to build these devices, once a device is built and tested, its design and engineering details are essentially fixed. This means that the functionality of the BMI (what it can do and how it operates) is by necessity also fixed, limited by the constraints of its design specifications.

The problem, however, is that requirements and needs vary to a significant degree from patient to patient. Even for patients diagnosed with the same disorder, the parameters of a BMI (for example, how many neurons to record from, and how decoding algorithms should interpret changes in recorded signals) may need to be tuned differently to achieve performance optimized for the individual. Equally if not more significant, the needs of an individual patient will change and evolve over time, as disease progresses or simply as a normal part of aging.

Even more challenging, how a BMI needs to interact with the brain may vary on short, highly dynamic time scales, within the course of minutes or hours. Depending on the physical nature of the activity a patient is engaged in, or the degree of intellectual demand associated with a specific task, the BMI may need to adapt quickly. What the brain needs to do to change the channel on the TV is very different from what it needs to do to play a difficult video game, for example.

Even the time of day and cognitive state of the individual may have an effect on the demands put on a BMI. Are you trying to focus on a task late in the evening when you are tired? Or is it the morning and you are fresh and ready to go?

In short: a one-size-fits-all BMI cannot be truly optimized to the needs of an individual patient after it is surgically implanted.

The Opportunity: Machine Learning + BMIs

Some BMI technologies already incorporate physiological feedback or patient input to adjust their outputs and functions. But in general, human involvement is still needed, whether subjective or perceptual feedback from the patient or manual adjustment of parameters by a doctor. The integration of state-of-the-art machine learning to achieve optimized near-real-time functionality in BMIs, in other words adaptive and autonomous ‘smart’ BMIs, is still in its earliest stages.

With the integration of machine learning, BMIs may one day be able to learn and anticipate the contextual needs of the situations a patient finds themselves in. Such BMIs would adjust their outputs and functions in near real time to accommodate changing cognitive and physical demands, or apply what they learn in one scenario, under a specific set of conditions, to the patient’s needs under a different set of conditions in a new scenario, all without requiring interpretation or involvement from a human.
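One way to picture this kind of autonomous adaptation is a decoder parameter the BMI nudges on its own from an error signal, with no clinician in the loop. This is a deliberately minimal sketch; the update rule, learning rate, and error values are all invented for illustration.

```python
# Online adaptation of a single decoder gain: when the decoded movement
# overshoots the intended target (positive error), the gain is nudged
# down; when it undershoots (negative error), nudged up.

def adapt_gain(gain, error, learning_rate=0.1):
    """Return the gain adjusted against the observed error."""
    return gain - learning_rate * error

gain = 1.0
# Simulated overshoot errors observed across three successive trials;
# the adjustment shrinks as performance improves.
for error in [0.5, 0.3, 0.1]:
    gain = adapt_gain(gain, error)
```

A real adaptive BMI would update many parameters at once and guard against instability, but the closed loop of observe, adjust, repeat is the same idea.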

To be sure, there are many open questions and engineering challenges to solve before this becomes possible. For example, the demands on computing power, and whether the machine learning runs on the hardware in the patient or in the cloud, have to be considered. This is particularly serious here: what happens if the BMI needs an internet connection to function properly, but the patient finds themselves in an internet dead zone? Other considerations include the need for further optimized algorithms, and for specialized hardware designed to work with those algorithms. And the list goes on.

Yet, despite the challenges, real progress is being made. In one study, researchers demonstrated a proof-of-concept wireless BMI system that took advantage of state-of-the-art flexible electronics and convolutional neural networks (one of the most successful approaches in machine learning) to allow implanted patients to control a wheelchair.
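The core operation a convolutional neural network performs can be shown in a few lines: a small filter slides along the signal and responds to local patterns, the kind of feature a first CNN layer might learn from neural recordings. (Strictly, CNN layers compute cross-correlation, as below; the signal and filter here are invented toy values.)

```python
import numpy as np

def conv1d(signal, kernel):
    """Slide the kernel along the signal (valid mode, no padding),
    taking a dot product at each position -- the cross-correlation
    that CNN layers actually compute."""
    k = len(kernel)
    n = len(signal) - k + 1
    return np.array([np.dot(signal[i:i + k], kernel) for i in range(n)])

# A difference kernel responds wherever the signal jumps, i.e. it acts
# as a simple edge detector over the input.
signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
kernel = np.array([-1.0, 1.0])
features = conv1d(signal, kernel)
```

A full CNN stacks many learned filters with nonlinearities and pooling, but each layer's building block is this sliding dot product.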

And in another study, researchers used reinforcement learning, another type of machine learning, to optimize the calibration of a BMI while at the same time ‘transferring’ what the BMI learned in one scenario to a new one (an approach known as transfer learning, because knowledge gained in one situation is carried over to another). There are even textbooks now devoted to machine learning and artificial intelligence applications in BMIs.
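The reinforcement learning idea can be sketched as a BMI treating candidate decoder settings as actions and maintaining a value estimate for each, updated from a reward signal (say, how quickly the patient reached a target). The settings, rewards, and update rule below are all hypothetical, chosen only to show the mechanism.

```python
# Incremental value estimation, the building block of many
# reinforcement learning methods: each observed reward pulls the
# stored estimate part of the way toward it.

def update_value(value, reward, alpha=0.5):
    """Move the value estimate a fraction alpha toward the reward."""
    return value + alpha * (reward - value)

values = {"setting_a": 0.0, "setting_b": 0.0}

# Simulated trials: each pairs a decoder setting with the reward it
# earned (e.g. task completion speed, normalized to [0, 1]).
trials = [("setting_a", 1.0), ("setting_b", 0.2), ("setting_a", 0.8)]
for setting, reward in trials:
    values[setting] = update_value(values[setting], reward)

# The BMI would then favor the setting with the highest estimated value.
best = max(values, key=values.get)
```

A real system would also balance exploring untried settings against exploiting good ones; the value table above is only the scorekeeping half of that loop.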

In the end, one day, future patients that need BMIs, as well as their families and loved ones, will be the ultimate beneficiaries of these technologies and the confluence of efforts by thousands of scientists, engineers, and doctors. And that is a hope worth collectively pursuing.

This article was originally published on Forbes. You can check out this and other pieces written by the author there.

Reprinted with permission from the author.

Gabriel A. Silva is a theoretical and computational neuroscientist and bioengineer, Professor of Bioengineering at the Jacobs School of Engineering and Professor of Neurosciences in the School of Medicine at the University of California San Diego (UCSD). He is also the Founding Director of the Center for Engineered Natural Intelligence (CENI) at UCSD, and is a Jacobs Faculty Endowed Scholar in Engineering. He holds additional appointments in the Department of NanoEngineering, the BioCircuits Institute, the Neurosciences Graduate Program, Computational Neurobiology Program, and Institute for Neural Computation.



