Archive for February, 2013

“Your boy’s suit I designed to withstand enormous friction without heating up or wearing out, a useful feature. Your daughter’s suit was tricky, but I finally created a sturdy material that can disappear completely as she does. Your suit can stretch as far as you can without injuring yourself, and still retain its shape. Virtually indestructible, yet it breathes like Egyptian cotton” (Edna Mode)

Have you ever wondered what superhero suits are made of? Unstable molecules and whatnot are very '80s, and Kevlar is too Die Hard. Nowadays, it's better to use carbon nanotubes, at least according to Batwoman… But can we really make a super-strong, light, thin superhero suit out of this stuff?

[Image: Batwoman]

Nanotubes (or buckytubes) belong to the fullerene family (molecules composed entirely of carbon) and are similar in structure to graphite. Specifically, nanotubes have a cylindrical nanostructure with an enormous length-to-diameter ratio: they are basically long, hollow tubes with one-atom-thick walls, which can be nested in the case of multi-walled nanotubes (MWCNTs). Their specific properties depend on the angle at which the walls are rolled (their chirality) and on their radius.

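If you want to play with the numbers yourself, the usual textbook formulas give a tube's diameter and chiral angle directly from its (n, m) rolling indices, taking the graphene lattice constant as roughly 0.246 nm. Here is a quick Python sketch of mine, just for illustration:

```python
import math

A_CC = 0.246  # graphene lattice constant, in nanometers (~0.246 nm)

def nanotube_geometry(n, m):
    """Return (diameter_nm, chiral_angle_deg) for an (n, m) nanotube."""
    diameter = A_CC * math.sqrt(n**2 + n * m + m**2) / math.pi
    chiral_angle = math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))
    return diameter, chiral_angle

# A few common cases: (10, 10) "armchair", (17, 0) "zigzag", (12, 6) chiral
for n, m in [(10, 10), (17, 0), (12, 6)]:
    d, theta = nanotube_geometry(n, m)
    print(f"({n},{m}): diameter ~ {d:.2f} nm, chiral angle ~ {theta:.1f} deg")
```

An armchair (10, 10) tube, for instance, comes out at about 1.36 nm across, which gives an idea of just how "nano" these things are.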

What makes nanotubes so interesting, though? Basically, they are the strongest and stiffest materials yet discovered in terms of tensile strength and elastic modulus, meaning they can withstand enormous tension before breaking and barely deform under load. This is mostly due to the nature of their chemical bonding (sp2), which is stronger than the sp3 bonds found in diamond. They can also withstand high pressures, have electrical properties that depend on their structure (they can behave as metals or semiconductors), show optical and electromagnetic absorption, and have good thermal conductivity. Too bad they also seem to be toxic if exposure is long enough.

Most applications of nanotubes, given their stiffness and strength, involve building very hard but light structures (from vessel components to hockey sticks). However, despite their potential toxicity, there have indeed been efforts to build bulletproof textiles with nanotubes. The key issue in this field is fiber spinning, which has been feasible since 2000 (Paul Pascal Research Center in Pessac, France). Recent processes are based on the natural tendency of individual nanotubes to align themselves into ropes held together by van der Waals forces, so that if some are pulled from a plane, others will try to realign with them. Ray H. Baughman of the University of Texas did the trick by driving a thread of aerogel into a nanotube substrate so that the tubes would align with the column (like a simplified Spidey alien suit!). The aerogel is then dissolved and, voilà, you get a pure carbon membrane that is light, flexible, ultra-strong and also an electrical conductor: super-light armor! Obviously, a suit of this stuff could stop bullets and knives, but there is still the problem of absorbing the impact: internal bleeding and broken bones would still be an issue.

[Image: carbon nanotubes]

Nanotube textiles have many other interesting applications, like heated fabrics, biosensing textiles or battery fabrics. Too bad nanotubes are still $1,000 per pound, right?


One of my (few) favorite moments in Ridley Scott's Prometheus is the spaceship mapping using fancy flying spheres. Such devices are currently feasible (and it would be super-cool if they were flying spheres equipped with light beams, too). Drones (small autonomous flying machines) have been in military use at least since the Second World War and, at this point, one can acquire a Parrot for a reasonable price. Sensors could be a problem at some point, but Kinect-like devices are actually close to what the movie shows.

Let's start slow: the process of mapping an unknown environment as they do in Prometheus is called SLAM (Simultaneous Localization and Mapping) and, depending on the available sensors, is largely a solved problem (http://openslam.org/). If a person had to map a corridor environment, they would probably start by counting long strides (say, approximately one meter each) to measure each corridor, and then try to recall any bifurcations along the way. If we know a bit about mazes, we would probably use the right-hand rule, turning into every new corridor on the right; eventually, we would be out of the place with a (partial) map of the maze under one arm, too! The main problem with this approach is, obviously, that our perception of distance is only approximate, so we keep accumulating errors. In a straight corridor this is not so important, unless we want to match maps acquired by different people, but once turning is involved, errors grow faster and render the map useless after a short while. The figure on the left shows what a map looks like to the casual observer when the data-gathering platform does not correct localization errors, whereas the one on the right shows what happens when these errors are corrected.

[Figures: map built without correcting localization errors (left) vs. the same environment mapped with SLAM correction (right)]
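To get a feel for how fast uncorrected errors blow up, here is a tiny dead-reckoning sketch of mine (a toy example, not code from any SLAM package): we walk "straight" for a hundred one-meter strides, but each stride picks up a few degrees of heading error, and the estimated endpoint typically drifts meters off course.

```python
import math
import random

def dead_reckon(n_steps, step_len=1.0, heading_noise_deg=3.0, seed=0):
    """Integrate noisy strides and return the final (x, y) position.
    A small per-step heading error compounds into a large position error."""
    rng = random.Random(seed)
    x = y = heading = 0.0
    for _ in range(n_steps):
        heading += math.radians(rng.gauss(0.0, heading_noise_deg))
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    return x, y

# Walk "straight" down a 100 m corridor: the dead-reckoned endpoint usually
# ends up several meters away from the ideal (100, 0).
x, y = dead_reckon(100)
print(f"believed endpoint: (100.0, 0.0), dead-reckoned endpoint: ({x:.1f}, {y:.1f})")
```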

If we could assume that all corridors are perfectly orthogonal, this would not happen, because we would be certain that every new corridor starts exactly 90 degrees from the previous one. However, in an unknown environment, no such assumptions can be made.

In order to solve this problem, one can try to match what we have perceived so far with what we are perceiving at the moment. If we keep correlating what we perceive with what we have stored in our memory, we are actually performing SLAM. In most cases, methods follow a statistical approach more or less derived from classic Bayes theory: the probability that we are at a given place, given what we are observing, is proportional to the probability of making that observation from that place, times how likely we thought that place was beforehand. To take the simplest possible example, in a corridor where doors are separated by exactly one meter, if we had just observed a door, moved ahead, and now observe another door, the most likely explanation is that we have moved ahead exactly one meter. This process is illustrated in the figure:

[Figure: tracking]
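For the curious, the door example can be written down as a tiny 1D Bayes (histogram) filter. The following Python toy is mine, with made-up numbers for the motion and sensor noise; it just shows how the motion update spreads our belief out and the door observation concentrates it again on the door positions:

```python
# Minimal 1D histogram (Bayes) filter over corridor positions, following the
# door example above. Doors every meter, positions discretized to 0.1 m cells.
CELL = 0.1                          # meters per cell
N = 100                             # a 10 m corridor, discretized
DOOR_CELLS = set(range(0, N, 10))   # a door every meter (every 10 cells)

def normalize(p):
    s = sum(p)
    return [v / s for v in p]

def predict(belief, move_cells, p_exact=0.6, p_off=0.2):
    """Motion update: we think we moved `move_cells`, but may be one cell off.
    (The corridor wraps around at the end just to keep the toy simple.)"""
    new = [0.0] * N
    for i, b in enumerate(belief):
        new[(i + move_cells) % N] += p_exact * b
        new[(i + move_cells - 1) % N] += p_off * b
        new[(i + move_cells + 1) % N] += p_off * b
    return normalize(new)

def correct(belief, saw_door, p_hit=0.9, p_miss=0.2):
    """Measurement update: seeing a door makes door cells more likely."""
    return normalize([(p_hit if (i in DOOR_CELLS) == saw_door else p_miss) * b
                      for i, b in enumerate(belief)])

belief = [1.0 / N] * N              # start with no idea where we are
for _ in range(3):                  # walk ~1 m three times, seeing a door each time
    belief = predict(belief, move_cells=10)
    belief = correct(belief, saw_door=True)

best = max(range(N), key=lambda i: belief[i])
print(f"most likely position: {best * CELL:.1f} m (probability {belief[best]:.2f})")
```

Note that with doors spaced perfectly evenly the belief stays spread over several door positions, which is exactly the ambiguity discussed next.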

Of course, things are not that easy if we do not know a priori that the doors are evenly spaced, as is usually the case, but we can still manage. For example, after a 180-degree turn in a straight corridor with no corners, the corridor segment we just left behind cannot suddenly appear at 40 degrees from our heading; the most plausible explanation is that we have turned too much or too little. The most usual approach is to extract features of the environment from different locations and compare where they lie with respect to us at each location: e.g. if we see a window right by our side in one location and it is 2 meters behind us in another, we have moved exactly 2 meters… if we can take that "exactly" for granted, that is.

There are many well-known techniques that solve SLAM in robotics, mainly Monte Carlo-based solutions, and a few of them are available for open frameworks like ROS. However, all of them are more or less elaborate mathematical variations of the same basic idea described above.
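As a taste of the Monte Carlo flavor, here is a bare-bones particle filter in the same toy corridor (again a sketch of mine with invented noise values, not code from ROS or any real package): a cloud of position hypotheses is moved, weighted by how well each one explains the "I see a door" observation, and resampled.

```python
import random
from collections import Counter

DOORS = [1.0, 2.0, 3.0, 4.0, 5.0]   # door positions along the corridor (m)

def near_door(x, tol=0.2):
    return any(abs(x - d) < tol for d in DOORS)

def mcl_step(particles, move, saw_door, rng):
    # 1) motion update: move every particle, with a little noise
    moved = [p + move + rng.gauss(0.0, 0.05) for p in particles]
    # 2) measurement update: weight particles by how well they explain the observation
    weights = [0.9 if near_door(p) == saw_door else 0.1 for p in moved]
    # 3) resample particles in proportion to their weights
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(0)
particles = [rng.uniform(0.0, 6.0) for _ in range(500)]   # no idea where we start
for _ in range(4):                                        # walk 1 m, see a door, repeat
    particles = mcl_step(particles, move=1.0, saw_door=True, rng=rng)

# Report the most populated half-meter bin as our position estimate
bins = Counter(round(p * 2) / 2 for p in particles)
mode, count = bins.most_common(1)[0]
print(f"most populated bin: {mode} m ({count} of {len(particles)} particles)")
```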

Now, the key is to have the sensors needed to acquire a 3D model of the environment on the run. In the movie, the spheres apparently use laser beams, which are perfect for estimating the distance from the emitter to a point, but to achieve this we would need a delicate motor system to sweep those lasers around, since each laser reading only tells us how far the projection of that laser on the nearest surface is from the camera that captures its light. We would need a laser capable of rotating around the flying sphere at full speed, again and again, like a PET scanner. Corridors would be sliced into planes around the sphere, and their surface would be rebuilt from the laser readings taken over the 360 degrees around it at each plane. Such a system could be built, but it would be expensive and heavy, so a small sphere is not likely to carry one any time soon.
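To see how little math is involved once the readings are in hand, here is a rough sketch of how one 360-degree sweep of such a rotating laser could be turned into a slice of 3D points (the room shape and numbers are made up for the example):

```python
import math

def slice_to_points(ranges, z, pose_xy=(0.0, 0.0)):
    """Convert a 360-reading sweep (one range per degree) into 3D points
    lying on the horizontal plane at height z."""
    x0, y0 = pose_xy
    points = []
    for deg, r in enumerate(ranges):
        a = math.radians(deg)
        points.append((x0 + r * math.cos(a), y0 + r * math.sin(a), z))
    return points

# Fake sweep: a 4 m-wide square room seen from its center
ranges = [2.0 / max(abs(math.cos(math.radians(d))), abs(math.sin(math.radians(d))))
          for d in range(360)]
cloud = slice_to_points(ranges, z=1.5)
print(len(cloud), "points in this slice, e.g.", cloud[0])
```

Stack enough of these slices at different heights (and with corrected poses) and you get the point cloud the walls can be rebuilt from.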

Nevertheless, once we know exactly where we are, it is not that hard to actually build a 3D model of the environment, as long as we can detect the distance to nearby obstacles and map (captured) textures onto planes. Not as fast as in Prometheus, though… yet.

[Image: the T-1000]

I don't know about you, guys, but my favorite thing in Terminator 2 was probably the T-1000, with its liquid metal structure doing all kinds of weird things. Wouldn't it be cool to have a material like that? Unfortunately, at the moment we don't, but morphing liquid metal does exist. And it is fairly common, too, in speakers and hard drives, for a start. This material is known as ferrofluid.

Ferrofluids are made up of tiny magnetic fragments of iron (nanoparticles) suspended in oil (often kerosene), with a surfactant (usually oleic acid) to prevent clumping. In fact, they can be made at home (with care!) using discarded stuff like old audio or video tapes, acetone and spent toner cartridges, or bought online, although they are a bit on the expensive side (around 100 USD per 8 oz). The resulting colloidal suspension is very sensitive to magnetic fields. The idea is pretty simple: the magnetic nanoparticles are attracted to the field but cannot clump, so they sort of cover the region of strongest field like a liquid layer. Besides, the surface goes all spiky, as the nanoparticles try to align themselves with the field lines just like iron filings do around a magnet. If one moves the magnetic field, the ferrofluid follows accordingly.

The main problem preventing us from building our very own T-1000 out of our folks' old video collection is that the resulting shapes are quite unpredictable, not nearly solid enough, and require constant manipulation of a magnetic field to bend them to our will. It is quite unlikely that we will build anything human-like using ferrofluids in the near future, but in the meantime people are doing neat stuff playing around with them, like Sachiko Kodama's dynamic sculptures.