Brain-Computer Interface Technology Applications in Space

By A. S. Deller | 27 November 2020

The Canadarm2 robotic arm with its robotic hand Dextre. (Credit: NASA)

Brain-computer interfaces (BCIs) are one of the major pillars behind the idea of transhumanism: extending human capabilities and even life beyond our biological bodies and minds. The most basic definition of a BCI is a direct connection between a human brain and some device outside of the body, like a computer, wheelchair, or robotic appendage. The interface works by monitoring the electrical signals sent between neurons and translating those signals into digital commands, producing actions that would otherwise require typing or manually manipulating the external device.

Some of the earliest and most exciting uses of BCIs – also known as brain-machine interfaces and neural-control interfaces – have been in medical applications, particularly in helping people who are paralyzed interact with the outside world. This work continues and will always be extremely important to pursue, but there are many other potential applications for BCIs. In fact, the sky may not even be the limit for them…

While BCIs aren’t an extremely recent invention, in-depth research on how this technology might be implemented in space is relatively new. A 2008 paper by Cristina de Negueruela et al. was among the earliest published work to dive into experiments showing how the electrical activity of a human brain could directly control robotic systems. The team states:

“Such a kind of brain-computer interface is a natural way to augment human capabilities by providing a new interaction link with the outside world and is particularly relevant as an aid for paralyzed humans, although it also opens up new possibilities in human-robot interaction for able-bodied people. One of these new fields of application is the use of brain-computer interfaces in the space environment, where astronauts are subject to extreme conditions and could greatly benefit from direct mental teleoperation of external semi-automatic manipulators…”

The team goes on to show how, by utilizing direct signals from the brain, commands can be delivered that avoid the latency and output delays common with manual controls.

Indeed, one of the overarching advantages of BCIs has always been “cutting out the middle-man”. For all of the 20th century, any kind of electrical or computer-based tool relied on three distinct nodes: the human whose intent needed to be transferred, the assisting device used to transfer that intent (e.g. a video game controller, or a computer keyboard and mouse) and the device receiving the intent. BCIs allow us to send that intent straight to the last node in that chain in much the same way that we make our hand reach out to flip a pancake on the griddle or do a push-up.

The electrical signals traveling up to 250 miles per hour between our neurons, carried along dendrites and axons, actually leak slightly from their pathways despite the insulating myelin sheaths. These electrical discharges are just enough to be picked up by electrodes. Whether the electrodes are worn on the skin or implanted directly onto the outer surface of the brain, such signals can be recorded and studied, then mapped to various intended actions. In a sense, software can now be written to “read” a mind at the highest level – just enough so that we no longer require a helping hand between us and the machine.
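To make that decoding step concrete, here is a minimal Python sketch of the idea: a window of electrode readings is reduced to a single feature and mapped to a command. Everything here (the signal values, the variance-based activity measure, and the threshold) is invented for illustration; real decoders rely on spectral features and trained classifiers.

```python
# Deliberately simplified decoder: map one window of electrode
# readings to a discrete command. Variance stands in as a crude
# activity measure; the threshold is invented for illustration.
import statistics

def decode_intent(window, threshold=25.0):
    """Return "MOVE" if the window shows strong activity, else "REST"."""
    activity = statistics.pvariance(window)
    return "MOVE" if activity > threshold else "REST"

# Simulated electrode windows: a quiet baseline vs. a burst of activity.
rest_window = [0.5, -0.4, 0.3, -0.2, 0.1, -0.3]
move_window = [12.0, -11.5, 13.2, -12.8, 11.9, -13.1]

print(decode_intent(rest_window))  # REST
print(decode_intent(move_window))  # MOVE
```

Once a mapping like this is calibrated per user, the decoded command can be routed straight to the connected device with no keyboard or joystick in between.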

In the early 2000s, NASA astronauts took part in a series of experiments dubbed “Neurocog”. This research demonstrated that even though humans experience changes in the brain-wave activity involved with visual perception when in a microgravity (zero-G) environment, these changes do not affect higher brain functions like problem-solving. Ultimately, the Neurocog research showed that early types of BCI monitoring of neural signals worked in space and that astronauts’ decision-making abilities are unaffected by living conditions in microgravity.

A key component of BCIs, and something astronauts will need in order to control robotic appendages and other assistive devices while performing critical and dangerous work in space, is RBSD: Real-time Brain Signal Decoding. This is the basis for the cognitive monitoring that allows BCIs to feed inputs directly from a human brain to whatever device it is connected to. Christian Kothe and Thorsten O. Zander examined this and used it to describe the idea of “passive BCIs” in their paper published in the Journal of Neural Engineering in 2011:

“In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper, we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users’ intentions, situational interpretations, and emotional states to the technical system. We call this approach passive BCI.”

The passive BCI concept is now core to many BCI applications. The innovation has brought us ever closer to crafting BCIs that function like a natural extension of the human nervous system rather than something that requires deliberate, concentrated thought.
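As a rough illustration of the passive BCI idea, the Python sketch below monitors a hypothetical workload estimate and adapts the interface without any deliberate command from the user. The theta/alpha ratio is a common EEG workload heuristic, but the specific feature names and threshold are assumptions made for this example, not taken from Kothe and Zander’s paper.

```python
# Hedged sketch of a passive BCI loop: the system continuously
# estimates the user's mental state from neural features and adapts
# itself, with no explicit command from the user.

def estimate_workload(theta_power, alpha_power):
    """Crude workload index: frontal theta power tends to rise, and
    parietal alpha power to fall, under high mental workload."""
    return theta_power / max(alpha_power, 1e-9)

def adapt_interface(workload, high=1.5):
    # The "passive" part: the system acts on the estimate itself,
    # e.g. simplifying the display when the operator looks overloaded.
    return "simplify display" if workload > high else "normal display"

print(adapt_interface(estimate_workload(3.0, 1.0)))  # simplify display
print(adapt_interface(estimate_workload(1.0, 2.0)))  # normal display
```

The contrast with an active BCI is that nothing here is a command: the astronaut never “thinks at” the system, yet the system still responds to their state.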

Space-based robotics is currently used for only a few purposes: securing and positioning objects via remote manipulator systems (RMS), and exploring and collecting samples via remotely operated vehicles (ROV). These systems will continue to grow in functionality and in the complexity of the tasks they can handle, especially as machine learning and artificial intelligence advance. They will also become extensions of human bodies and minds as BCIs mature. In the not-too-distant future, a single astronaut may be mentally connected to most major systems of a spacecraft – able to control ROV, RMS and even navigation by thought alone.

In a 2013 paper by Caterina Cinel et al., a research team of computer engineers from the University of Essex, partnering with NASA’s Jet Propulsion Laboratory, looked specifically at the possibility of BCIs being used to aid in space navigation. This was done rather simply, with a non-invasive BCI that let the subject steer a simulated spacecraft in two dimensions:

“Our system relies on an active display which produces event-related potentials (ERPs) in the user’s brain. These are analyzed in real-time to produce control vectors for the user interface. In tests, users of the simulator were told to pass as close as possible to the Sun. The performance was very promising, on average users managing to satisfy the simulation success criterion in 67.5% of the runs.”

Cinel’s team went on to study a collaborative mode that combines the ERPs of two subjects simultaneously. This system produced trajectories that were significant improvements over those generated by a single user, which points toward the possibility of BCIs that join an entire crew of a spacecraft into a “super-mind”.
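A minimal Python sketch of that collaborative mode, under the assumption that each user’s ERPs have already been decoded into a small control vector: fusing the two by averaging tends to cancel each user’s uncorrelated decoding noise, which is one intuition for why two brains can outperform one. The (turn, thrust) layout and the numbers below are hypothetical, not from the paper.

```python
# Sketch of collaborative control: vectors decoded from two users'
# brain activity are fused by a component-wise average before being
# sent to the simulated craft. All values here are invented.

def fuse_control_vectors(v1, v2):
    """Average two per-user control vectors component-wise."""
    return [(a + b) / 2 for a, b in zip(v1, v2)]

user_a = [0.8, 0.1]  # hypothetical (turn, thrust) decoded from user A
user_b = [0.6, 0.3]  # same quantities decoded from user B
fused = fuse_control_vectors(user_a, user_b)
print(fused)
```

Real collaborative BCIs weight each user by decoding confidence rather than averaging equally, but the noise-cancelling principle is the same.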

Brain-computer interfaces have a long way to go before they are a commonplace technology used by everyone. Based on the technology’s promise here on Earth and in space, it is safe to say BCIs will be a significant part of the technology landscape of the future. Luckily, new venture capital firms like SP8CEVC are growing to help support the innovators behind such important technologies, and are working to help ensure this future.

Reprinted with permission from the author.

A. S. Deller is a Sci fi, Fantasy and Science writer. Follow him at Medium and Twitter.
