Archive for the ‘I robot’ Category

Anyone who has seen Big Hero 6 probably loves the movie. Anyone who's seen the flick and actually works in robotics probably loves it ten times more. Unlike many (supposedly) historic blockbusters I'd rather not remember, these guys actually enrolled some very well-known robotics scientists as consultants, and the benefits are obvious: most of the basic stuff is scientifically sound.

For a start, let’s focus on microbots. More specifically on swarms.

Swarm robots are (large-ish) groups of robots that work together. Their collective behavior results from local interactions between the robots, and between the robots and the environment in which they act. This research field has been active for a long time now, and Marco Dorigo is probably one of its best-known scientists. Swarms work collectively and without explicit centralized supervision. This means that the robots share a global goal and a set of rules to follow, but each robot makes decisions on its own. Hence, we get fault-tolerant, scalable and flexible systems: it does not matter if a handful of robots stop working properly, because the strength is in the numbers.

Obviously, if we want to work with a few hundred robots at a time, they need to be cheap, small and battery-savvy (imagine having to recharge 300 robots every few hours!). A good example of this is Rubenstein's Kilobots (Harvard University).

Instead of conventional motors, Kilobots make do with smartphone-like vibration motors, which are much cheaper, lighter and easier on the batteries. When these motors vibrate, they shift the center of mass of the robot, displacing it forward (imagine someone pushes you a bit while you are standing: you need to move to regain your balance, right?). This is actually the basis of a classic workshop where kids build the simplest possible robot out of a toothbrush and a smartphone vibration motor.

If you have two motors, one on each side of the robot, you can also turn right and left by activating one or the other. Kilobots also talk to each other via infrared communication, so they can estimate approximately where they are with respect to the rest. Using this information, they can collectively adopt any shape by following three simple rules: edge-following, gradient formation and localization.
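The localization rule can be sketched in a few lines: given distance estimates to neighbors whose positions are already known (real Kilobots infer distance from infrared signal strength), a robot can recover its own coordinates. The function below is a toy, centralized stand-in for that distributed process; all names and parameters are illustrative.

```python
import math

def localize(neighbors, distances, guess=(0.0, 0.0), steps=3000, lr=0.01):
    """Estimate our own (x, y) from distances to neighbors at known positions.

    Simple gradient descent on the squared distance error: a toy stand-in
    for the distributed trilateration Kilobots perform.
    """
    x, y = guess
    for _ in range(steps):
        gx = gy = 0.0
        for (nx, ny), d in zip(neighbors, distances):
            dx, dy = x - nx, y - ny
            r = math.hypot(dx, dy) or 1e-9   # avoid division by zero
            err = r - d                       # positive if our guess is too far away
            gx += err * dx / r
            gy += err * dy / r
        x -= lr * gx
        y -= lr * gy
    return x, y

# Three "anchor" robots at known positions; the true position is (1, 1).
anchors = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
dists = [math.hypot(1 - ax, 1 - ay) for ax, ay in anchors]
print(localize(anchors, dists, guess=(0.5, 0.2)))  # close to (1.0, 1.0)
```

In the real swarm each robot runs something like this continuously, seeded by the stationary reference robots described below.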


The system works as follows. We give ALL robots information about the shape they must adopt and fix a small number of (stationary) robots in a corner of the shape (that becomes our origin of coordinates, i.e. the kilometer zero of our reference frame). The rest of the robots try to estimate their position within the shape with respect to these robots (i.e. the coordinate system). They also keep track of how many robots lie between the static robots and themselves (gradient formation). Robots move by following the frontier of the global robot formation (edge following). They keep moving until they decide that they are within the boundaries of the desired shape, and they stop when they detect that they are about to leave those boundaries or when they collide with a robot with the same gradient value. After a while (unfortunately, several hours, unlike in Big Hero 6) the robots manage to organize themselves into the desired shape. Taking into account that we are talking about more than 1000 robots following only these simple rules, this is quite a big deal.
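Of the three rules, gradient formation is the easiest to sketch: seed robots hold value 0, and every other robot repeatedly sets its own value to one plus the minimum among its neighbors. The synchronous toy version below assumes global knowledge of positions (real Kilobots only ever see their infrared neighbors and update asynchronously); names are illustrative.

```python
import math

def form_gradient(positions, comm_radius, seeds):
    """Hop-count gradient from the seed robots.

    Each non-seed robot applies: my gradient = min(neighbors' gradients) + 1,
    until no value changes.
    """
    n = len(positions)
    grad = [0 if i in seeds else math.inf for i in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in seeds:
                continue
            xi, yi = positions[i]
            neigh = [grad[j] for j in range(n)
                     if j != i and math.hypot(xi - positions[j][0],
                                              yi - positions[j][1]) <= comm_radius]
            best = min(neigh, default=math.inf) + 1
            if best < grad[i]:
                grad[i] = best
                changed = True
    return grad

# Five robots in a line, one unit apart; robot 0 is the seed.
pts = [(i, 0) for i in range(5)]
print(form_gradient(pts, comm_radius=1.5, seeds={0}))  # [0, 1, 2, 3, 4]
```

The resulting hop counts are what lets a moving robot know when it has bumped into a robot "as deep" into the shape as itself, which is one of the stopping conditions above.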

So, yup, just planar shapes and quite slow compared to the movie, but definitely along the same lines!

More information on IEEE Spectrum and How Stuff Works

Anyone else out there loved The X-Files (at least during the first few seasons, before it became a closed loop...)? I think I particularly enjoyed the first season, because every episode was a reference to some B-series horror movie. I think they made at least two nods to Carpenter's The Thing, and one of them was Firewalker:

In this episode, some volcanologists go missing during an expedition, and an exploration robot sends back some creepy video footage that catches Mulder's attention. I'm not going to focus on the X-File itself, but on the robot inside the volcano, basically because it is as real as it gets.


If I recall the episode correctly, the robot in The X-Files was similar to the Italian Robovolc, an all-terrain research robot funded by the European Union ICT program from 2000 to 2004. Robovolc, however, was only expected to explore volcanic areas, not to go inside the crater. Its tracks were appropriate for moving over lava flows, ash, spatter cones and large ground fractures, but if the robot rolled over inside the volcano, that would be the end of it: recovery would be tricky, to say the least, in such an environment.


Reportedly, the best robots for coping with uneven terrain, where rolling over may become a serious issue, are legged ones (or mesh robots like the Tet-Walker, but those are still on the drawing board). Here, for example, one can watch Big Dog fall, roll over and get back on its (four) feet. This skill is crucial if a robot is meant to be dropped by parachute over a desert or, case in hand, rappelled into a volcano.


Dante was developed for NASA in a spider-like shape precisely to roam a volcano from the inside and send video footage home. Also, one would guess, as a local test for alternative planet explorers to Sojourner, Spirit and Opportunity. Needless to say, Dante was christened after the Divine Comedy, since it was supposed to descend into hell.

Using its tether cable, anchored at the crater rim, the robot descended into craters to gather and analyze high-temperature gases from the crater floor. Exactly like in the X-Files episode. Furthermore, Dante I was built at Carnegie Mellon University around 1992, so the writers of the show probably used it as a reference. Within 10 months, Dante I had descended into an active volcano, Mount Erebus, in Antarctica. Eventually, the communications tether failed and the mission ended prematurely after only 20 feet, but the robot actually worked. Indeed, CMU developed Dante II, a second tethered walking robot, which explored the Mt. Spurr (Aleutian Range, Alaska) volcano in July 1994. Dante II worked fine (660 feet into the crater) until it was crushed by a huge rock on its way out. Given that the $1.8 million project remains buried there, it is understandable that they did not try again, even though these robots were pretty awesome.

Later experiments like Robovolc or rackWalker-II settled for out-of-the-volcano exploration. Maybe in the future smaller/cheaper spiders can be developed to go inside again. Unfortunately, there are two main problems with making smaller volcano-exploration robots: i) smaller legs cannot cope with large obstacles like rocks and ground fractures; and ii) the equipment required to analyze gases and chemicals and to gather samples tends to be bulky. In the meantime, we have to settle for Dante's videos 😦


As if I didn’t think already that every robotic engineer has a nerd (not so) deep inside …


You are never going to guess what Festo robotics has recently designed! You did? Drats! Maybe the Doc Ock shot was a giveaway and whatnot. They claim that they got the idea from elephant trunks. I beg to differ, but whatever.


Originally, in the Spider-Man comics, Doc Ock's arms were simply tools controlled by his brain. In Spider-Man 2, however, it is stated that they are equipped with microcontrollers to gain some autonomy (and, hence, become robots). Believe it or not, this approach is actually scientifically sound. Shared control (or shared autonomy) is traditionally used when a human needs to control something very complex. In order to reduce the required mental load, the equipment to be controlled is robotized (adding sensors and a processor) and allowed to make some decisions on its own. Think, for example, of the stabilizers of a drone: big decisions go to the human, but other aspects are handled by the system instead.
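In its simplest form, shared control can be pictured as a weighted blend between what the human commands and what the controller would do on its own. The weight below is purely illustrative; real systems decide the split per task, not with a single fixed number.

```python
def shared_control(human_cmd, auto_cmd, autonomy=0.3):
    """Blend a human command with the controller's own correction.

    autonomy = 0 gives pure teleoperation, autonomy = 1 full autonomy;
    anything in between is shared control.
    """
    return (1 - autonomy) * human_cmd + autonomy * auto_cmd

# The human pushes the stick hard (1.0); the stabilizer votes for staying put (0.0).
print(shared_control(1.0, 0.0))  # 0.7
```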

The key novelty of the new octo-arms (besides looking simply cool) is that they are capable of learning (and, one would expect, are safeguarded against turning evil). Instead of carefully designing algorithms to move one way or another, they simply mimic how elephant trunks move and learn from that.

Learning by imitation is actually not that simple. First, the motion of whatever the robot is going to mimic needs to be analyzed and parametrized. Then, the obtained motion coordinates need to be translated into the body structure of the robot. Inverse kinematics (also used to map whatever an actor does onto a virtual character) usually does the trick for you. Let's think, for example, of a robot arm with an elbow. Each articulation in the arm (e.g. shoulder, elbow, wrist...) is equipped with a motor that allows only a certain set of movements. For example, human elbows only bend in a specific way and wrists will only rotate so much (unless you are in a Steven Seagal movie). Each motor is consequently represented by a set of equations that define its allowed movements. All motors as a whole define the arm kinematics.



Now, if we have a sensor that determines the location in 3D space of an object we want to grab, we need to move the tip of the arm towards that location, but the tip does not move alone: all motors in the arm need to operate together to reach the goal. This, obviously, becomes a system of equations that needs to be solved. Now imagine a Doc Ock arm with dozens of motors that need to move in synchrony, and you get an idea of how complex obtaining an analytical solution to the problem is.
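For a planar arm with just a shoulder and an elbow, the system can still be solved in closed form, which gives a feel for the equations before they explode with dozens of motors. A minimal sketch (link lengths and target are made up for the example):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar shoulder-elbow arm.

    Returns (shoulder, elbow) angles in radians for one of the two
    possible solutions ("elbow up" vs "elbow down").
    """
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics: where does the arm tip end up?"""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

s, e = two_link_ik(1.0, 1.0, l1=1.0, l2=1.0)
print(forward(s, e, 1.0, 1.0))  # back to (1.0, 1.0), so the solution checks out
```

With two joints there is a closed formula; with many more joints (and redundancy, as in an octo-arm) one must resort to numerical solvers instead.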

In order to make the process faster, we can actually observe how a real arm operates when a person wants to reach a given object. Observation gives us an idea of the relative position of all the joints in the arm, so we can drastically limit the complexity of our system. Although objects won't usually be in exactly the same position, the robot's processor can make small adjustments to adapt to its current needs. This process is called learning by imitation, and it is widely used in bio-inspired robots, like the Essex fish.
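The "small adjustments" step can be sketched as: keep the shape of the observed joint trajectory and blend in just enough offset to reach the new goal. This toy one-dimensional version stands in for real methods such as dynamic movement primitives; all names are illustrative.

```python
def adapt_demonstration(demo, new_goal):
    """Shift a demonstrated 1-D joint trajectory so it ends at a new goal.

    The offset is blended in linearly over time, so the start of the motion
    is untouched and the overall shape is preserved.
    """
    offset = new_goal - demo[-1]
    n = len(demo) - 1
    return [q + offset * (i / n) for i, q in enumerate(demo)]

demo = [0.0, 0.2, 0.5, 0.9, 1.0]   # observed joint angle over time
print(adapt_demonstration(demo, 1.5))  # same shape, now ending at 1.5
```

The robot reuses the demonstrated motion as a template and only solves for the (much smaller) correction, which is exactly why imitation makes the inverse kinematics problem tractable.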

Wanna know more? Visit Maja Mataric’s website on Learning by Imitation