Giant robots vs giant monsters? I knew I’d like Pacific Rim even before I set foot in the theater. And I was not disappointed, mind you.
One of the things that kept me thinking was the Jaeger control interface: the robots were controlled by two human brains (the stress seemed to be too much for one alone). In fact, to show how both brains were linked, the pilots repeated the same movements and the robot mimicked them, as if it were operated using electromyography instead. In action movie terms, this means that pilots need to be good fighters and compatible with each other to control the robot. In reality?
In fact, it is possible to control systems with the brain. This field, widely known as Brain-Computer Interfacing (BCI), is extensive, but has offered irregular results. In most cases, scientists use electroencephalography (EEG) because it does not require opening volunteers’ heads and poking things inside. The main problem with EEG, though, is that it provides a large amount of data with a very poor signal-to-noise ratio, meaning that extracting patterns from the captured signals is about as hard as finding the proverbial needle in a haystack. Consequently, rather than looking for very precise commands, researchers mostly quantify a reduced number of bins (sometimes labeled as mental states) to choose among a limited number of options. A typical example is trying to move a square on a screen in one of the four cardinal directions: right, left, up or down.
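To make the "bins" idea concrete, here is a toy sketch of that pipeline: reduce a noisy EEG window to a couple of band-power features, then pick whichever calibrated mental-state centroid lies closest. Everything here (sampling rate, frequency bands, the nearest-centroid rule) is illustrative, not a description of any real BCI system.

```python
import numpy as np

FS = 128  # assumed sampling rate in Hz

def band_power(window, fs, low, high):
    """Average spectral power of one EEG channel in a frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= low) & (freqs < high)
    return spectrum[mask].mean()

def features(window, fs=FS):
    """Alpha (8-12 Hz) and beta (13-30 Hz) power as a 2-D feature vector."""
    return np.array([band_power(window, fs, 8, 12),
                     band_power(window, fs, 13, 30)])

def classify(window, centroids, fs=FS):
    """Nearest-centroid decision among the calibrated mental states.

    `centroids` maps a label (e.g. "right") to the feature vector
    recorded for that state during a calibration session.
    """
    f = features(window, fs)
    return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))
```

In a real system the centroids would come from a calibration session in which the user holds each mental state in turn, and the classifier would be far more robust than a nearest-centroid rule.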
However, in order to fit clearly into one of these bins, the user must keep a state of continuous awareness to adequately maneuver the controlled device. Think, for example, about juggling several balls while following a lively conversation at the same time. Obviously, fine control here is analogous to voice-based fine control, only harder, something that can lead to excessive mental load and exhaustion.
Assuming that a person cannot stay concentrated 24/7, some researchers turned to a different technique that might do the trick: rather than clustering existing signals, it is also possible to provoke a strong one and detect it. The chosen one is usually the P300 evoked potential, a natural, involuntary response of the brain to infrequent stimuli. It fits the so-called oddball paradigm: a random sequence of stimuli is presented, only one of which interests the subject. Around 300 ms after the target flashes, a positive potential peak appears in the EEG signal, which can be reliably detected and related to the interesting stimulus. P300-based BCIs require almost no user training and only a few minutes to calibrate the detection algorithm’s parameters, and they have been used for tasks like wheelchair control or instant messaging.
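The detection step can be sketched in a few lines: time-lock an epoch of EEG to each stimulus flash, average the epochs per stimulus so the noise washes out, and pick the stimulus whose average is largest around 300 ms. This is a minimal illustration on simulated data; real P300 spellers use many repetitions and a trained classifier rather than a raw window average.

```python
import numpy as np

FS = 256                   # assumed sampling rate in Hz
EPOCH = int(0.6 * FS)      # look at the 600 ms following each flash

def p300_template(fs=FS, n=EPOCH):
    """Idealized positive bump centered at 300 ms (for simulation only)."""
    t = np.arange(n) / fs
    return np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def pick_target(epochs_by_stimulus, fs=FS):
    """Choose the attended stimulus.

    `epochs_by_stimulus` maps each stimulus to an array of shape
    (n_trials, EPOCH). The winner is the stimulus whose trial-averaged
    response is largest in a 250-350 ms window, where the P300 lives.
    """
    lo, hi = int(0.25 * fs), int(0.35 * fs)
    def score(epochs):
        return epochs.mean(axis=0)[lo:hi].mean()
    return max(epochs_by_stimulus, key=lambda s: score(epochs_by_stimulus[s]))
```

Averaging is what makes this work despite EEG's terrible signal-to-noise ratio: the P300 bump adds up coherently across trials while the noise cancels out.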
We are not even close to brain-controlling something like a giant biped robot, of course. But what if we actually used several brains to achieve the task? Is it even possible?
The answer seems to be yes … up to a point. After success in linking two rat brains to achieve a cooperative task, scientists at the University of Essex published a paper on “cooperative brain-computer interfaces”. Apparently, they equipped two people with EEG helmets and asked them to mind-control a simulated spaceship. Don’t get your hopes too high: the ship could only be commanded in 8 different directions and, still, a human with a simple joystick outperformed them every time. However, researchers from the University of California also found that the accuracy of EEG in predicting an arm-reaching motion improved dramatically when the EEG signals from groups of 5, 10, 15, and 20 people were fused. Although it looks like we won’t be mind-controlling our own giant robots any time soon, the idea of putting heads together for a boost seems promising.
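There is a simple statistical reason why pooling brains helps: if each person's recording carries the same underlying signal plus independent noise, averaging N recordings shrinks the noise and improves the signal-to-noise ratio by roughly a factor of N. The sketch below demonstrates this with a toy "EEG" (a sine wave plus Gaussian noise) and plain averaging as the fusion step; it is an illustration of the principle, not of the actual method used in the studies above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_subjects(n_subjects, n_samples=512, noise_std=2.0):
    """Each subject records the same 5 Hz 'neural' signal plus independent noise."""
    t = np.linspace(0, 1, n_samples)
    signal = np.sin(2 * np.pi * 5 * t)              # shared component
    noise = rng.normal(0.0, noise_std, (n_subjects, n_samples))
    return signal, signal + noise                   # truth, per-subject recordings

def snr(signal, recording):
    """Ratio of signal power to residual-noise power."""
    residual = recording - signal
    return signal.var() / residual.var()

signal, recordings = simulate_subjects(20)
solo = snr(signal, recordings[0])                   # one brain
fused = snr(signal, recordings.mean(axis=0))        # twenty brains averaged
```

With twenty subjects, `fused` comes out roughly twenty times larger than `solo`, which is the whole appeal of putting heads together.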