Posts Tagged ‘Robot’

Anyone who has seen Big Hero 6 probably loves the movie. Anyone who's seen the flick and actually works in robotics probably loves it ten times more. Unlike many (supposedly) historic blockbusters I'd rather not remember, these guys actually enrolled some very well-known robotics scientists as consultants, and the benefits are obvious: most of the basic stuff is scientifically sound.

For a start, let’s focus on microbots. More specifically on swarms.

Swarm robots are (large-ish) groups of robots that work together. Their collective behavior results from local interactions between the robots and between the robots and the environment in which they act. This research field has been active for a long time now, and Marco Dorigo is probably one of the best-known scientists in it. Swarms work collectively and without explicit centralized supervision. This means that the robots have a global goal and a set of rules to follow, but each robot makes decisions on its own. Hence, we get fault-tolerant, scalable and flexible systems, i.e. it does not matter if a handful of robots are not working properly; the strength is in the numbers.

Obviously, if we want to work with a few hundred robots at a time, they need to be cheap, small and battery savvy (imagine you had to recharge 300 robots every few hours!). A good example of this is Rubenstein's Kilobots (Harvard University).

Instead of conventional motors, Kilobots make do with smartphone-like vibration motors, which are much cheaper, lighter and easier on the batteries. When these motors vibrate, they shift the center of mass of the robot, displacing it forwards (imagine someone pushes you a bit while you are standing: you need to move to regain equilibrium, right?). This is actually the basis of a classic workshop in which kids build the simplest possible robot out of a toothbrush and a smartphone vibration motor.

If you have two motors, one on each side of the robot, you can also rotate right and left by activating one or the other. Kilobots also talk to each other via infrared communication, so they can calculate approximately where they are with respect to the rest. Using this information, they can collectively adopt any shape by following just three simple rules: edge-following, gradient formation and localization.
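That neighbor-to-neighbor localization can be sketched in a few lines: given the coordinates of already-localized neighbors and the distances measured from IR signal strength, a robot can iteratively nudge its own position estimate until it agrees with every measurement. This is only an illustrative sketch of the idea (names and numbers are mine), not the actual Kilobot firmware:

```python
import math

def localize(neighbors, guess=(0.0, 0.0), iters=1000, step=0.5):
    """Estimate our own (x, y) from neighbors' positions and measured distances.

    `neighbors` is a list of ((x, y), measured_distance) pairs, e.g. distances
    inferred from IR signal strength. Each pass nudges the position estimate
    along the bearing to each neighbor until the computed distances match
    the measured ones.
    """
    x, y = guess
    for _ in range(iters):
        for (nx, ny), d in neighbors:
            dx, dy = x - nx, y - ny
            cur = math.hypot(dx, dy) or 1e-9  # avoid division by zero
            # Move a fraction of the radial error toward the circle of
            # radius d centered on this neighbor.
            x -= step * (dx / cur) * (cur - d)
            y -= step * (dy / cur) * (cur - d)
    return x, y

# Three localized neighbors, all reporting distance 5; true position is (3, 4):
anchors = [((0, 0), 5.0), ((6, 0), 5.0), ((0, 8), 5.0)]
print(localize(anchors))  # close to (3.0, 4.0)
```

With three or more non-collinear neighbors the estimate is unique, which is why the Kilobot shape-formation experiments seed the process with a small cluster of stationary robots.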


The system works as follows. We give ALL robots information about the shape they must adopt and fix a small number of (stationary) robots in a corner of the shape (that becomes our origin of coordinates, i.e. the kilometre zero of our reference). The rest of the robots try to estimate their position within the shape with respect to these robots (i.e. the coordinate system). They also keep track of how many robots lie between the static robots and themselves (gradient). Robots move basically by following the frontier of the global robot formation (edge-following). They keep moving until they decide that they are within the boundaries of the desired shape, and they stop when they detect that they are about to leave those boundaries or when they collide with a robot with the same gradient value. After a while (unfortunately, several hours, unlike in Big Hero 6) the robots manage to organize themselves into the desired shape. Taking into account that we are talking about more than 1000 robots and that only these simple rules are used, this is quite a big deal.
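Of the three rules, gradient formation is the easiest to sketch: every robot keeps setting its own value to one plus the smallest value it hears from its neighbors, while the stationary seed robots stay fixed at zero. Here is a toy, centralized simulation of that purely local rule (the structure and names are mine for illustration, not Rubenstein's published code):

```python
def form_gradient(adjacency, seeds):
    """Distributed gradient formation, one of the three Kilobot rules.

    Each robot repeatedly sets its gradient to 1 + the minimum gradient
    among its neighbours; seed robots stay at 0. The result is every
    robot's hop distance to the seeds, computed from local information
    only. `adjacency` maps robot id -> set of neighbour ids.
    """
    INF = float("inf")
    grad = {r: (0 if r in seeds else INF) for r in adjacency}
    changed = True
    while changed:            # real robots just keep re-broadcasting forever;
        changed = False       # we stop once the values settle
        for r in adjacency:
            if r in seeds:
                continue
            best = min((grad[n] for n in adjacency[r]), default=INF)
            if best + 1 < grad[r]:
                grad[r] = best + 1
                changed = True
    return grad

# A small line of robots: 0 (seed) - 1 - 2 - 3
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(form_gradient(adj, seeds={0}))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

A moving robot can then compare its own gradient with its neighbours' values to tell whether it is on the frontier of the formation, which is what makes the edge-following and stopping rules work.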

So, yup, just planar shapes and quite slow compared with the movie, but definitely along the same lines!

More information on IEEE Spectrum and How Stuff Works

Anyone else out there loved The X-Files (at least during the first few seasons, before it became a closed loop…)? I think I particularly enjoyed the first season, because every episode was a reference to some B-series horror movie. I think they made at least two takes on Carpenter's The Thing, and one of them was Firewalker:

In this episode, some volcanologists go missing during an expedition and an exploration robot sends back some creepy video footage that easily catches Mulder's attention. I'm not going to focus on the X-File itself, but on the robot inside the volcano. Basically, because it is as real as it gets.


If I recall the episode correctly, the robot in The X-Files was similar to the Italian Robovolc, an all-terrain research robot funded by the European Union's ICT programme from 2000 to 2004. Robovolc, however, was only expected to explore volcanic areas, not to go inside the crater. Its tracks were appropriate for moving over lava flows, ash and spatter cones and large ground fractures, but if the robot rolled over inside the volcano, it would be done for, and recovery would be tricky at the very least in such an environment.


Reportedly, the best robots to cope with uneven terrain, where rolling over may become a serious issue, are legged ones (or mesh robots, like Tet-Walker, but those are still on the drawing board). Here, for example, one can watch Big Dog fall, roll over and get back on its (four) feet. This skill is crucial if a robot is meant to be dropped by parachute over a desert or, case in hand, rappelled into a volcano.


Dante was developed in spider shape by NASA precisely to roam a volcano from the inside and send video feedback home. Also as an Earth-bound test for planetary explorers along the lines of Sojourner, Spirit and Opportunity, one would guess. Needless to say, Dante was christened after the Divine Comedy, since it was supposed to descend into hell.

Using a tether cable anchored at the crater rim, the robot descended into craters to gather and analyze high-temperature gases from the crater floor. Exactly like in the X-Files episode. Furthermore, Dante I was built at Carnegie Mellon University around 1992, so the writers of the show probably used it as a reference. Within 10 months, Dante I had descended into an active volcano, Mount Erebus, in Antarctica. Eventually, the communications tether failed and the mission ended prematurely after only 20 feet, but the robot actually worked. Indeed, CMU developed Dante II, a second tethered walking robot, which explored the Mount Spurr (Aleutian Range, Alaska) volcano in July 1994. Dante II worked fine (660 feet into the crater) until it was crushed by a huge rock on its way out. Given that the $1.8 million project remains buried there, it is understandable that they did not try again, even though these robots were pretty awesome.

Later experiments like RoboVolc or rackWalker-II settled for out-of-the-volcano exploration. Maybe in the future smaller/cheaper spiders can be developed to go inside again. Unfortunately, there are two main problems with making smaller volcano-exploration robots: i) smaller legs cannot cope with large obstacles like rocks and ground fractures; and ii) the equipment required to analyze gases and chemicals and to gather samples tends to be bulky. In the meantime, we have to settle for Dante's videos 😦


Ripley style!

Posted: April 19, 2013 in I robot

Have you ever met an alien queen and wanted to shout “Get away from her, you b*tch!”? Well, probably not, but if you’ve seen the movie, at some point you’ve probably been curious about how an exoskeleton would feel.

In fact, exoskeletons are already a (semi)functional reality, mostly because they attracted the attention of the most powerful public investor in the world: the military. Equipping soldiers with an all-powerful exoskeleton is not a new idea: Heinlein already introduced it in 1959 in his novel Starship Troopers, and mechas became an instant classic in Japanese manga and anime. The latest one would obviously be Tony Stark's super-cool (and flying!) Iron Man suit. These suits allow soldiers to lift heavy weights and jump long distances, and this is basically what DARPA requested a decade ago in one of their (well-funded) robotics challenges: an exoskeleton to allow lifting of up to 400 pounds (180 kg) without breaking a sweat.


Activelink exoskeleton grasper. Ring any bells? 🙂

Truth be told, the winning prototype XOS 1 (by Sarcos) "only" lifted 90 kg and allowed the user to run at 10 mph, but it was promising enough for Sarcos to be funded and, later, purchased by Raytheon.

XOS solved, up to a point, all the initial problems related to building an exoskeleton:

-Power feeding: if your (motorless) smartphone battery goes dead after a day, how many batteries would it take to move a (multimotor) metal structure with you inside? Don't think too big, because all those batteries add to the weight of your suit. It wouldn't do to need a supersuit just to lift the weight of your… supersuit, right? The goal for these suits is to have juice for at least 24 hours, but thus far most of them last only a few hours.

-Human/suit interface: do you find it hard to pull off those pesky combos on your Xbox 360 controller? Well, just imagine doing that with a dozen motors and joints at a time, knowing full well that meeting the ground face first depends firmly on your driving skills. Even if you are a combo master, you are not supposed to be paying attention to a joystick in the middle of a battlefield, right? So everything should be more intuitive. Also, the lag between command and reaction should be minimized; otherwise you would feel like you were trying to run through water.

-Safety is the most important issue, both for the suit user and the people around them. If you want to lift 180 kg, it's fine to be superstrong, but if you unwillingly collide with a fellow nearby, you don't want him reduced to a bloody pulp, right? Besides, if there's a glitch in your suit's motion control, you don't want powerful motors to rip your arm off because the suit doesn't know that the human elbow does not bend the other way. These problems are basically solved using force feedback (haptics), inverse kinematics (telling the robot how a human moves) and proper sensors, but everything needs to be super-tuned if we are actually wearing the thing.

The XOS had 30 actuators to control its 30 joints and used hydraulic cylinders (like our car brakes) to power them. Feedback was based on keeping a number of contact points at the joints and mimicking whatever the user does (via improved force sensors), like when we teach a kid to dance by balancing their feet on our own. A computer was in charge of translating the user's input into motor output.
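A toy version of that mimic-the-user loop for a single joint: the suit reads how hard the wearer pushes against a contact point and commands the actuator to push the same way, only harder, with a hard clamp so a sensor glitch cannot drive the joint past safe limits. Every name and number below is made up for illustration; Raytheon has not published the XOS internals:

```python
def suit_joint_command(contact_force, amplification=10.0, gain=0.8,
                       max_torque=200.0):
    """One control tick for a single exoskeleton joint (toy model).

    The user pushes on a contact sensor; the suit commands its actuator
    in the same direction, scaled up, so the wearer feels only a fraction
    of the load. The output is clamped to a safety limit so a glitchy
    reading cannot rip an arm off.
    """
    torque = gain * amplification * contact_force
    return max(-max_torque, min(max_torque, torque))

print(suit_joint_command(5.0))    # 40.0 -> the user's gentle push becomes real help
print(suit_joint_command(900.0))  # 200.0 -> clamped at the safety limit
```

In a real suit this tick would run hundreds of times per second on each of the 30 joints, which is exactly why the lag and safety issues above matter so much.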



Later, Raytheon bought Sarcos and produced the XOS 2, a suit powered by an internal-combustion hydraulics engine with electrical systems, which uses lighter materials and is about 50% more energy efficient than the XOS 1. It has processors on every joint, and its actuators are reported to deliver about 200 kg of force per square centimetre using pressurized hydraulics.

There are two other major potential investors in this field: space and health.



NASA, for example, believes that exoskeletons might be just the thing to maintain muscle tone in space, where things don't weigh anything and one hardly exercises at all. In this sense, their 57-pound X1 robotic exoskeleton is meant to inhibit movement in leg joints, although it can also assist it in case some angry, giant spider-bug crawls into the space station (or we want to use it for assistance on Earth, which is more boring, but also way more likely). X1 has 10 degrees of freedom, or joints: four motorized ones at the hips and knees, and six passive ones for sidestepping, turning, and pointing and flexing the foot.

If someone fancies an Iron Man suit instead, Trek Aerospace is developing the Springtail Exoskeleton Flying Vehicle, an exoskeleton frame with a jetpack, expected to fly at up to 70 miles per hour (112.6 kilometers per hour) and to hover motionlessly thousands of feet above the ground.

The most down-to-earth application of exoskeletons is, however, assisted motion. If your limbs or back won't support you, these thingies could free you from the wheelchair and preserve your autonomy (at least as long as the batteries last).

Cyberdyne (unrelated to the infamous company that created the evil Skynet) is a Japanese company that commercializes the Hybrid Assistive Limb, or HAL (also unrelated to the equally evil HAL computer in 2001: A Space Odyssey; these guys have a gift for names). HAL is a power-assisted pair of legs, and the company has also developed similar robotic arms.



Unlike the previous designs, HAL is meant to help the elderly with mobility, or to help hospital and nursing carers lift patients (nowadays you need a fairly bulky mobile mini-crane to do this if the patient can't help at all). HAL is operating in 150 hospitals nowadays, and suits are leased at approximately $1,950 per unit per year.

However, the most novel feature of HAL is not its purpose but its user interface: Cyberdyne claims that HAL can be operated with the brain. In fact, according to their website, they do not detect EEG signals but bio-electrical signals that appear in the limbs when the brain decides it's time to move them (this approach is actually closer to electromyography, but without the needles and the pain). Given a large enough number of sensors to detect the appropriate currents, each combination of readings is associated with a different motion, which is fed to the actuators (e.g. if all four sensors at hip, shoulder, elbow and wrist are in motion, maybe you're throwing a punch; if the hip sensor is off, maybe you're simply trying to reach something). Since it is highly unlikely that the sensors work correctly all the time, incomplete information is filled in by onboard computers, similarly to how isolated words can be concatenated to form a meaningful sentence.
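One way to picture that combination-of-readings idea is to treat a dead sensor as a wildcard and pick the motion pattern that best matches the sensors that did fire. The patterns and site names below are entirely hypothetical, since Cyberdyne does not publish theirs:

```python
def classify_motion(sensors):
    """Map a combination of bio-electric sensor readings to a motion.

    `sensors` maps site name -> True (activity detected), False (quiet)
    or None (sensor unreadable). Unreadable sites are treated as
    wildcards, in the spirit of HAL's onboard gap-filling.
    """
    patterns = {
        ("hip", "shoulder", "elbow", "wrist"): "punch",
        ("shoulder", "elbow", "wrist"): "reach",
        ("hip",): "step",
    }
    active = {s for s, v in sensors.items() if v}
    unknown = {s for s, v in sensors.items() if v is None}
    best, best_score = "idle", 0
    for sites, motion in patterns.items():
        required = set(sites)
        # A pattern matches if every required site is active or unreadable,
        # and no site outside the pattern fired.
        if required <= active | unknown and not (active - required):
            score = len(required & active)
            if score > best_score:
                best, best_score = motion, score
    return best

# The elbow sensor is dead (None), but the rest of the pattern still
# identifies the motion:
readings = {"hip": True, "shoulder": True, "elbow": None, "wrist": True}
print(classify_motion(readings))  # punch
```

A real system would of course work with continuous signal levels and trained classifiers rather than boolean patterns, but the principle of inferring the intended motion from an incomplete set of readings is the same.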