Archive for June, 2013

My fav Star Wars movie is definitely The Empire Strikes Back, for a lot of reasons, including a very epic (although cliffhanger-y) finale for every main character in the saga. The infamous “No, I am your father” line between Luke and Vader has become legendary by now, and it certainly adds to the drama that Luke had his hand cleanly cut off by a lightsaber not a minute earlier. One would have thought that his career as a swordsman was over, but, hey, it is the future after all! Just before the end credits, he’s already flexing a bionic one.

While this was science fiction back in the 80s, prosthetics are now improving fast. Where the most one could expect 30 years ago was a rigid, plastic-looking hand, current technology is aiming for features like realistic-looking artificial skin, mobility in every part of the hand and, ultimately, brain control of the prosthetic.

The main challenge for bionic limbs is precision tasks that need to be carried out with very specific force and finger positioning. In particular, gripping is not easy for a robotic hand: too much force will break or damage whatever we are trying to grasp, whereas too little and we won’t be able to hold it at all. Of course, the solution to the problem is feedback: we start gripping with a reasonable strength and increase it gradually depending on the force we perceive in the fingers. In fact, this problem has been solved for surgical robots, which can now press a knife against skin with just enough force not to pierce it, or cut a watermelon in half if they want to.
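
To make that feedback idea concrete, here is a minimal sketch of a grip controller that keeps closing the fingers a little more each cycle until the measured fingertip force reaches a target. Everything hardware-related here (read_force_sensor, close_fingers, the thresholds) is a hypothetical placeholder, not the interface of any real hand:

```python
# Minimal sketch of a force-feedback grip loop (hypothetical hardware API).
import time

TARGET_FORCE_N = 2.0      # force we want on the fingertips (assumed value)
FORCE_TOLERANCE_N = 0.1   # how close to the target is "good enough"
STEP_DEG = 1.0            # how much we close the finger joints per iteration

def read_force_sensor() -> float:
    """Placeholder: return the current fingertip force in newtons."""
    raise NotImplementedError

def close_fingers(step_deg: float) -> None:
    """Placeholder: close every finger joint by a small angle."""
    raise NotImplementedError

def grip() -> None:
    """Close gradually until the perceived force is just enough."""
    while read_force_sensor() < TARGET_FORCE_N - FORCE_TOLERANCE_N:
        close_fingers(STEP_DEG)   # tighten a little...
        time.sleep(0.01)          # ...then let the sensor reading settle
    # at this point we hold the object without crushing it
```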

[Image: i-limb ultra revolution prosthetic hand]

Most commercial devices, like the i-limb ultra prosthetic hand from Touch Bionics, are operated by behavior-based software: they offer a range of preprogrammed skills to assist in daily tasks, which the user can select as needed.

The idea is quite simple. Each time we decide to perform some task with our hand, some neurons in our brain are triggered and our arm muscles release electric impulses. The first input can be captured via Brain Computer Interfaces, whereas the second can be captured via electromyography, i.e. electrodes placed in our limbs. Since the captured patterns are very similar whenever we intend the same movement, if we capture enough of them, at some point we’ll be able to split them into clusters depending on the action we want to perform and, hence, command an artificial hand to actually do it for us.

The hand itself is controlled by a microcontroller that receives the action to be performed and decomposes it into the sequence of commands to the hand’s different motors required to accomplish it. These sequences are pre-recorded in the microcontroller, which just needs to adjust their parameters to the environment: feedback from the force sensors in the hand is used to control how far each finger closes, how much strength we apply, etc.
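
As a toy illustration of that pipeline (and nothing more), the sketch below clusters made-up EMG feature vectors with scikit-learn’s KMeans and maps each cluster to a pre-recorded grip. The features, numbers and cluster-to-action mapping are all invented for the example; real systems use far richer features and proper training:

```python
# Toy illustration: cluster EMG feature vectors, then map each cluster to a
# pre-recorded hand action. All data and labels here are invented.
import numpy as np
from sklearn.cluster import KMeans

# Pretend each row is a feature vector extracted from one window of EMG signal
# (e.g. mean absolute value per channel).
rng = np.random.default_rng(0)
emg_features = np.vstack([
    rng.normal(loc=0.2, scale=0.05, size=(50, 4)),   # patterns for one intended movement
    rng.normal(loc=0.8, scale=0.05, size=(50, 4)),   # patterns for another intended movement
])

# Split the captured patterns into clusters, one per intended action.
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(emg_features)

# Each cluster is associated with a pre-recorded command sequence (hypothetical mapping).
ACTIONS = {0: "power_grip", 1: "pinch_grip"}

def command_for(new_window: np.ndarray) -> str:
    """Map a new EMG feature vector to the action the hand should perform."""
    cluster = clusterer.predict(new_window.reshape(1, -1))[0]
    return ACTIONS[cluster]

# Prints whichever action is mapped to the nearest cluster.
print(command_for(rng.normal(loc=0.75, scale=0.05, size=4)))
```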

[Image: i-limb ultra prosthetic hand]

There are other commercial prosthetics already on the market, like Bebionics’ arm. It has several buttons for behavior selection, but if you follow the video for a while, you’ll see how it obeys electric impulses from the muscles in the arm (myoelectric impulses) and, after calibration, it allows precision tasks like tying shoelaces.

We’ve seen it in the 1966 film Fantastic Voyage, and again in Joe Dante’s Innerspace in the 80s: some guy in a fancy capsule gets shrunk and injected into a human body and, voila, here we are, sailing the bloodstream and fighting white blood cells Space Invaders style.

While science is not any closer to reducing human size further than your average diet would, we do in fact have a good chance of actually sailing the bloodstream using… nanobots! Listen to the Powerpuff Girls for the best definition I’ve found so far… 😀

Do you fancy some tiny thing with a mind of its own moving inside your body? Well, you might if you need it. Especially if the alternative is major surgery of the difficult kind. Of course, we all know that medicine has evolved a lot: instead of the classic cut-and-sew approach, surgeons may now use a probe with a tiny camera to get into our body and work inside it to cope with micro-problems. However, there are narrow areas where catheters might puncture an artery wall, or areas that are way too maze-like to reach.

Similarly, if we have to interact with a large number of places spread across large areas of the body (think, for example, of cancerous cells), it makes more sense to send tiny robots on a search-and-destroy mission than to try to find, isolate and extract every single cell, or to use more aggressive treatments like chemo or radiotherapy.

[Image: nanobots]

In these cases, nanobots may come in handy: think of a tiny smart device capable of travelling through your veins to whatever destination is necessary to perform maintenance at the microscale. Cool, right? The video below offers animations of the different possibilities of nanobots in microsurgery.

There are many problems related to nanobot construction but, since technology already allows the development of really, really tiny chips, the two major ones nowadays are batteries and motors, which cannot be shrunk as much as processing units. A first solution to these challenges is offered by bio-bots, or smart molecules. These nanoparticles are reportedly capable of navigating towards defined goals for precise drug delivery. In practice, though, they are more often than not swept out of the bloodstream and into the liver.

[Image: BIND-014 nanoparticle illustration]

It has been reported in Science Translational Medicine, though, that BIND-014 might do the trick. Originally developed at MIT and currently commercialized by BIND Therapeutics, BIND-014 has a polymer core that slowly releases the chemotherapy drug docetaxel. Its surface is covered with small molecules, some of which are used to fool our immunological bodyguards, whereas others bind to a particular protein found on prostate tumors and on the newly forming blood vessels that feed the growth of other types of solid tumors. Apparently, in animal tests, subjects receiving docetaxel via these nanoparticles presented a (localized) concentration 1000 times higher than the rest.

Not thrilled that your nanobot is still just another fancy molecule? We’ve got cyborgbots too. Professor Sylvain Martel, at the École Polytechnique de Montréal, in Canada, has already created nanobots using live, swimming bacteria coupled to polymer beads and injected through the carotid artery of a living pig. These guys travel at 10 centimeters per second (360 m/h, which is not a lot, taking into account that our circulatory system is almost 100,000 km long) and have been tracked using magnetic resonance imaging (MRI) after the addition of magnetic particles. They move using tiny corkscrew-like tails, or flagella.
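
For scale, here is the back-of-the-envelope arithmetic behind that parenthesis. Purely illustrative: no bot would ever need to traverse the whole network, it is just to show how slow 360 m/h is compared to the size of the circulatory system:

```python
# Back-of-the-envelope check of the figures quoted above.
speed_m_per_s = 0.10                   # 10 cm/s
speed_m_per_h = speed_m_per_s * 3600   # -> 360 m/h, as stated in the text

total_vessel_length_km = 100_000       # rough length of the human circulatory system
hours = total_vessel_length_km * 1000 / speed_m_per_h
print(f"{speed_m_per_h:.0f} m/h; sweeping every vessel would take ~{hours / 24 / 365:.0f} years")
# roughly 32 years, hence "not a lot"
```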

Still disappointed by the lack of your good ol’ motors and circuits? No prob, they are also on the way. Professor James Friend, from Monash University, has published in the Journal of Micromechanics and Microengineering that he might have a solution to our huge battery/motor problem. The idea is to use a piezoelectric motor to rotate the nanobot’s flagella and propel it. Piezoelectricity is generated when piezoelectric materials are subjected to mechanical stress… yes, exactly what a tiny thing would suffer inside the turbulent blood flow. If you’ve ever kayaked downstream in rough waters, you’ll know what I mean. Current piezoelectric nanomotors still need some battery support to do their thing, but researchers are, as usual, optimistic in this respect.

In the meantime, Sylvain Martel has managed to command a legion of bacteria to carry his nanobots around. No, I’m not joking: he presented it at the IEEE 2008 Biorobotics Conference and even captured it on video. Behold, ye biological slaves!!

[Image: Johnny Mnemonic]

Before he appeared in The Matrix, Keanu Reeves had already made a name for himself in sci-fi flicks. Most of you probably recall the cyberpunk movie Johnny Mnemonic: “320 GB of stolen data wetwired directly into his brain”. We are not there yet, but there have been some advances in wetwiring things into the brain that deserve a post.

The fact is that the University of Southern California has reported the possibility of an implant, available to patients in five to ten years, that might help people with localized brain damage, like patients after a stroke. The idea follows a black-box model, much like cochlear implants: scientists study the brain to check how memories are stored. This process involves the activation of sets of neurons, specifically in the hippocampus, where short-term memories become long-term ones. A sequence of activations in one area triggers another set of neurons in a healthy area. It is not necessary to understand why; it is enough to associate the input and output areas. Damaged areas, however, cannot generate the input sequence. Hence, a set of electrodes and a control chip (to activate them in a particular order) are inserted into the damaged area to replicate what it can no longer do. If everything works right, the brain does not mind whether the input neurons activated themselves or were triggered by electrodes: it will respond equally to both stimuli as long as both input sequences are the same.

Think, for example, of Internet access on your smartphone: at home, the phone is most likely connected to your WiFi, whereas on the street it will be connected via 3G or GPRS. One most likely ignores how WiFi or 3G work on the inside (and does not really care), as long as they replace each other well enough that we keep receiving IMs on our phone. Similarly, it won’t matter whether it was our brain cells or a chip that did the trick, as long as we can recall where we left the bloody keys.
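
Just to make the black-box idea concrete, here is a deliberately naive sketch: record which input firing patterns precede which output patterns in healthy tissue, then replay the stored output pattern through the stimulation electrodes whenever the chip recognizes an input it has seen before. Every name and data structure here is invented for illustration; the real implant is nothing this simple:

```python
# Naive illustration of the black-box idea: associate input firing patterns
# with the output patterns that normally follow them, then replay the output
# through stimulation electrodes when the input is recognized.
from typing import Dict, Optional, Tuple

# A "pattern" is just which electrodes fired, in order, during a time window.
Pattern = Tuple[int, ...]

class HippocampalBypass:
    def __init__(self) -> None:
        # learned mapping: input pattern -> output pattern observed in healthy tissue
        self.mapping: Dict[Pattern, Pattern] = {}

    def learn(self, input_pattern: Pattern, output_pattern: Pattern) -> None:
        """Record that this input sequence was followed by this output sequence."""
        self.mapping[input_pattern] = output_pattern

    def stimulate(self, observed_input: Pattern) -> Optional[Pattern]:
        """If the input is recognized, return the sequence to drive the
        stimulation electrodes with; otherwise do nothing."""
        return self.mapping.get(observed_input)

# Usage sketch: pattern (3, 1, 4) on the recording electrodes should trigger
# pattern (2, 7) on the stimulating electrodes.
bypass = HippocampalBypass()
bypass.learn((3, 1, 4), (2, 7))
print(bypass.stimulate((3, 1, 4)))  # -> (2, 7)
```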

[Image: memory activation]

The idea of implanting a device in our brains to deliver current to our neurons may make us a bit queasy, but in fact there are already similar devices working today to treat epilepsy (see image below) and Parkinson’s. The problem is that the current version of the hardware required to do the trick is by no means tiny, so we are at least 10 years away from the plug-and-play version of this technology.

[Image: epilepsy implant]

One could still be skeptical about the capacity to map sequences of activation/deactivation in something as small and numerous as neurons. However, researchers at MIT and Georgia Tech have reported an automatic process to find and record such information in the living brain. They propose using a robotic arm guided by a cell-detecting computer algorithm to do the task with micrometer accuracy. The arm moves a pipette in two-micrometer steps to detect cells, preventing it from poking through the membrane. Then, an electrode can break through the membrane safely to record the cell’s internal electrical activity. They are now working on scaling up the number of electrodes to record multiple neurons at a time and see how they work together. Their ultimate goal is to classify the thousands of different types of cells in the brain, map how they connect to each other, and figure out how damaged cells differ from normal ones.
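
One way automated systems of this kind typically detect a cell is by watching the pipette’s electrical resistance, which jumps when the tip presses against a membrane. Assuming that approach, here is a rough sketch of the stepping loop; the hardware calls and thresholds are hypothetical placeholders, not the researchers’ actual code:

```python
# Rough sketch of an automated cell-hunting loop (hypothetical hardware API).

STEP_UM = 2.0                 # advance the pipette two micrometers at a time
RESISTANCE_JUMP_PCT = 10.0    # resistance increase that suggests a cell at the tip
MAX_DEPTH_UM = 500.0          # give up after travelling this far

def measure_resistance_mohm() -> float:
    """Placeholder: measured pipette resistance in megaohms."""
    raise NotImplementedError

def advance_pipette(step_um: float) -> None:
    """Placeholder: move the pipette forward by a small step."""
    raise NotImplementedError

def hunt_for_cell() -> bool:
    """Step forward until a resistance jump indicates a cell at the tip,
    without pushing through its membrane. Returns True if a cell was found."""
    baseline = measure_resistance_mohm()
    depth = 0.0
    while depth < MAX_DEPTH_UM:
        advance_pipette(STEP_UM)
        depth += STEP_UM
        increase = 100.0 * (measure_resistance_mohm() - baseline) / baseline
        if increase > RESISTANCE_JUMP_PCT:
            return True       # stop here; sealing and break-in happen next
    return False
```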

Combining these two scientific outcomes, one can see how they plan to bypass the hippocampus. We are probably not wetwiring gigabytes into our brains just yet, but we are one step closer to dreaming of electric sheep.

See more about Ted Berger’s research on his website.
See more about the robotic arm on MIT News.