‘Smart’ Brain-Machine Interfaces That Adapt to Your Needs and Intent

Machine Learning meets Neural Engineering

By Prof. Gabriel A. Silva | 14 October 2020

(Credit: Dreamstime.com)

I’m going to confess something I’ve never told anyone. Much of my current research at the intersection of neuroscience, mathematics, and machine learning was inspired by a single beautiful scene in the movie Ex Machina. The movie’s two protagonists, Nathan and Caleb, are in the lab where Nathan built Ava, a humanoid artificial intelligence (AI), and they share an exchange about how Nathan engineered Ava’s brain: “Structured gel. I had to get away from circuitry. I needed something that could arrange and re-arrange at a molecular level, but keep its form when required. Holding for memories, shifting for thoughts.” Of course, science fiction has a long history of motivating and inspiring what eventually becomes real science. And nowhere is this more true than when it comes to the brain (okay, other than maybe space), and in particular when it comes to brain-machine interfaces (BMIs) becoming AI-enabled.

A confluence of technological capabilities is creating the opportunity and setting the conditions for machine learning and AI to enable “smart” brain-machine interfaces: devices and technologies designed to adapt and respond to the user’s brain, predicting needs and intent in a context-dependent way, and learning and adapting to changing functional requirements. Surgically implantable BMIs have important applications in personalized clinical neuroscience. And non-invasive BMI technologies can be used not only in medicine but also in gaming, including augmented and virtual reality.

What are brain-machine interfaces?

Brain-machine and brain-computer interfaces (terms often used interchangeably) are technologies designed to interface and communicate with the brain and spinal cord. (In science fiction, think ‘The Matrix’ and how the characters could plug directly into their brains through ports at the base of the skull.) One important class of such devices is surgically implantable neural prostheses. These devices are intended to restore some degree of clinical function to the patients who receive them, and they can have immeasurable impacts on the quality of life of their recipients and their families. One example is closed-loop deep brain stimulation for treating motor disorders such as Parkinson’s disease, which stimulates the brain in a way that is responsive to the patient’s brain signals and to how the brain is reacting to the stimulation. Another example is the effort to restore vision through retinal neural prostheses. The retinas in your eyes, which are actually part of the brain itself, are photosensitive neural tissue that detects light and converts it into a set of signals the brain can understand and interpret. These examples don’t even begin to scratch the surface; this is a very large and active field of research.

Not all BMIs are invasive or intended to be surgically implanted, though. There is a growing class of non-invasive brain-machine interface technologies not meant for clinical use but rather intended to augment the user experience and control interface for gaming, including augmented and virtual reality applications. Typically, these technologies rely on signals produced by the brain that can be detected and measured outside the body, such as electroencephalography (EEG). Despite the difference in focus compared with technologies aimed at treating and restoring clinical function and quality of life in patients, the gaming sector shouldn’t be dismissed. It is a huge economic driver of advances in BMI technologies, on par with healthcare applications, and it can provide knowledge and resources that benefit clinically related research and treatments.
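To make the EEG idea a little more concrete, here is a minimal sketch, in Python using only the standard library, of the kind of band-power feature a non-invasive interface might extract from a short window of signal. Everything here is an illustrative assumption (the sampling rate, the band edges, the synthetic "EEG"), not any particular headset's API, and real systems use optimized FFT routines rather than this naive DFT:

```python
import math
import random

def band_power(signal, fs, f_lo, f_hi):
    """Naive DFT: summed power of the frequency bins within [f_lo, f_hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of bin k
        if f_lo <= f <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 256  # assumed sampling rate (Hz)
rng = random.Random(0)
# Synthetic 1-second "EEG": a 10 Hz alpha rhythm buried in noise
signal = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * rng.gauss(0, 1)
          for t in range(fs)]

alpha = band_power(signal, fs, 8, 12)   # alpha band (8-12 Hz)
beta = band_power(signal, fs, 13, 30)   # beta band (13-30 Hz)
print(alpha > beta)  # the injected 10 Hz rhythm dominates the alpha band
```

Features like these band powers are what a classifier would then map to a control command, for example a cursor movement in a game.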

How machine learning and AI can enable ‘smart’ BMIs

Machine learning and AI can provide opportunities to create “smart” BMIs that contextually learn and adapt to changing functional requirements and demands. Ideally, such a device would interpret brain signals to execute output commands of different kinds in a contextual, real-time way, and even potentially anticipate and respond to intent.

These kinds of technologies have the potential to produce individualized experiences in gaming and augmented and virtual reality. And they offer an opportunity to tailor the response of BMIs to the individual, evolving needs of patients. I can’t overstate the importance of this last point. Up until now, as sophisticated as the engineering in the existing state of the art has been, BMIs have been rather static in their designs and functionality. Any ability of the devices to respond to the brain is necessarily fixed during the design phase. In the case of a neural prosthesis, once implanted there is no autonomous adaptation or learning on the part of the device, no ‘tweaking’ of its outputs to optimize how it interacts with the brain. To be fair, some devices do allow for tweaking after being implanted through the periodic intervention of a human (think of a patient who comes into the clinic, where a doctor reads the outputs and then adjusts some knobs). But it has not been possible for BMIs themselves to do this in real time, or in any autonomously responsive way, as they continuously learn about the user’s brain. It has been a ‘one size fits all’ kind of situation.

But now, continued advances in neural engineering, nanotechnology, and miniaturization can combine with the amazing progress in machine learning and AI. The result is BMIs that have the potential to continuously learn from the brain, adapt, and in turn optimize how they interact with the brain: engineering that responds to the brain, as opposed to the brain reacting to the engineering.

What advantages do machine learning and AI offer BMIs? What exactly are machine learning algorithms learning, and how can they use that information to adapt in a meaningful way? What these algorithms can learn is information provided by feedback and telemetry from hardware measurements of the brain. This could be information about the current state of the device’s output settings, or any number of external signals measured by sensors in the BMI: for example, physiological measurements in response to stimulation, or feedback from algorithms external to the BMI-machine learning system, such as haptic or computer vision feedback from other sensors.
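As an illustration of this closed-loop idea, the sketch below adjusts a stimulation setting from measured feedback until a target physiological response is reached. It is purely hypothetical: `measured_response` is a stand-in for real telemetry (here a simple saturating curve the controller does not know in advance), and the proportional update is a toy rule, not any actual device's control algorithm:

```python
import math

def measured_response(amplitude):
    """Stand-in for physiological telemetry: the (unknown to the controller)
    patient response to a given stimulation amplitude."""
    return 1.0 - math.exp(-amplitude)

target = 0.8         # desired physiological response
amplitude = 0.1      # initial stimulation output setting
learning_rate = 0.5  # how aggressively the device self-adjusts

for step in range(50):
    response = measured_response(amplitude)  # feedback from the sensors
    error = target - response                # distance from the goal
    amplitude += learning_rate * error       # tweak the output setting

# After the loop, the device's setting produces (nearly) the target response
print(abs(target - measured_response(amplitude)) < 0.01)
```

The point of the sketch is the loop itself: measure, compare to a goal, adjust, repeat, with no clinician turning the knobs. A real adaptive BMI would replace the toy update with a learned model of the patient's responses.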

Challenges for success

While the potential impact of AI-enabled ‘smart’ BMIs is huge, there are, to be sure, a number of significant obstacles that will need to be addressed before any such potential is realized.

In the neural engineering itself, the development of hardware to measure the brain (recording brain signals is critical to all BMIs) and to stimulate it (in the case of neural prostheses) has necessarily focused on the core chemistry, physics, and engineering required to build such devices in the first place. This is understandable, of course. But beyond the challenges of designing and building BMIs, there are other engineering and data science challenges that will need to be addressed. In particular, for high-density devices with hundreds, and eventually many thousands or tens of thousands, of recording or stimulation components, exactly how will data from these devices be accessed and used? For example, if information about the spatial micro-anatomy or connectivity of networks of neurons is important to the algorithms or data analyses the device uses to make decisions, it isn’t sufficient just to be able to record from thousands of neurons. There has to be a systematic approach to getting that data out in an organized way so that the algorithms can properly interpret and make use of it.
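One way to picture that data-organization problem is a record that keeps each site's raw signal together with the spatial and connectivity metadata an adaptive algorithm would need. The schema below is a hypothetical minimal sketch, not a real device format; the field names and the toy three-channel layout are my own illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    """One recording/stimulation site, bundling its signal with the
    micro-anatomical position and local connectivity metadata."""
    channel_id: int
    position_um: tuple  # assumed (x, y, z) location in micrometers
    neighbors: list     # ids of anatomically connected channels
    samples: list = field(default_factory=list)  # raw recorded signal

# A toy 3-channel device laid out in a line: the signal alone is not
# enough; each site also carries where it is and what it connects to.
device = [
    Channel(0, (0.0, 0.0, 0.0), neighbors=[1]),
    Channel(1, (25.0, 0.0, 0.0), neighbors=[0, 2]),
    Channel(2, (50.0, 0.0, 0.0), neighbors=[1]),
]

def connected_pairs(channels):
    """Enumerate each unordered pair of connected sites exactly once."""
    return sorted({tuple(sorted((c.channel_id, n)))
                   for c in channels for n in c.neighbors})

print(connected_pairs(device))  # [(0, 1), (1, 2)]
```

With the metadata carried alongside the samples, a downstream algorithm can ask network-level questions (which sites are neighbors, how activity spreads) rather than just receiving thousands of anonymous signal streams.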

Another critical challenge has to do with the fact that we do not, of course, completely understand the brain itself. Questions about how the brain represents and encodes information (the neural code), and how the neurobiology implements its own internal algorithms to process that code, are at the forefront of neuroscience research. The fact that there is still so much we don’t understand about the brain makes it difficult, or not yet completely possible, to develop meaningful machine learning algorithms for technologies intended to interface with it.

Finally, the ethical considerations associated with all this work, and how to properly address them, must remain at the forefront of continued progress. But here also, similar to the recognition of ethical concerns prompted by the pace of progress in machine learning and AI, scientists, bioethicists, and other experts are heavily engaged with the ethical and moral implications of BMIs, as well as the additional considerations that AI brings with it.

These and other challenges notwithstanding, the idea that a locked-in patient may someday be able to communicate and move on their own using an AI-enabled BMI that adapts to their needs and intent is a fantastic pursuit. So are technologies that provide an immersive environment for autistic individuals to interact with the world in a personalized way that adapts in real time to how they feel, putting the individual in control. The research necessary to see these kinds of applications come to fruition across the various relevant scientific and technological disciplines is beginning to converge. All of a sudden it isn’t science fiction motivating science; it is science enabling what was once pure imagination.

Reprinted with permission from the author.

Gabriel A. Silva is a theoretical and computational neuroscientist and bioengineer, Professor of Bioengineering at the Jacobs School of Engineering and Professor of Neurosciences in the School of Medicine at the University of California San Diego (UCSD). He is also the Founding Director of the Center for Engineered Natural Intelligence (CENI) at UCSD, and is a Jacobs Faculty Endowed Scholar in Engineering. He holds additional appointments in the Department of NanoEngineering, the BioCircuits Institute, the Neurosciences Graduate Program, Computational Neurobiology Program, and Institute for Neural Computation.


