Archive for April, 2013

I’ve lost count of the number of TV series and movies where someone grabs a satellite image, a surveillance video or even an ATM shot and asks the local geek to “enhance” the picture. Suddenly they zoom into the eye of someone who was barely visible in the original picture and get the shot of the killer imprinted on the retina … all of it thanks to the miracles of image processing and the magic fingers of the local nerd.

True or false? Really …

In fact, there are several flaws here that make the process unbelievable to anyone who has worked in image processing for a while. In brief, the (real) resolution of an image depends mostly on a single factor: optical resolution. Optical resolution is a measure of the smallest distance at which two radiating points can still be told apart, i.e. of how well you (or a camera) really see. Trying to improve optical resolution after the fact is about as feasible as trying to see a black cat in a pitch-dark coal mine: the details are there, but you don’t get to see them.

To see why, just grab a book from your shelf and step back until you can no longer read the title. The title is still there, but your eyes can no longer separate the characters from the background. Of course, you can step closer and read it again, but the motion is not improving your resolution; it’s just equivalent to changing your “zoom”. Indeed, optical zoom does for cameras what binoculars do for our eyes: it brings things closer so you can see them better with your (fixed) optical resolution. However, once the image is (digitally) captured, you can’t change the zoom anymore, because you can no longer manipulate the zoom lenses. So what exactly do they (or any average image processing program) do in the movies with the captured footage? The answer is simple: they change the digital zoom.

You have probably heard of digital zoom if you’ve bought a decent digital camera: the specs usually quote figures like x15 optical, x30 digital zoom and such. And if you know a bit about photography, you know that digital zoom doesn’t matter: you can apply it later at your home computer, so the only one that counts is optical zoom. The reason is fairly simple: digital zoom simply takes two neighbouring pixels and places a third between them, whose color is the average of its neighbours. Hence, you can double the (pixel) size of the pic, even though you are not adding any detail that wasn’t already there: if I give you a stamp of the Parthenon and ask you to copy it onto an A3 sheet, you can probably do it, but you won’t get to see where the columns are most worn or what details are depicted in the frontispiece (in part because those are in the British Museum, but then again …); you just get the same details, only bigger.
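If you fancy trying the trick yourself, here is roughly all there is to a x2 digital zoom, sketched in a few lines of Python. This is a toy 1D version (real software interpolates in 2D, and often with fancier filters than a plain average), but the principle is identical: every “new” pixel is manufactured from pixels that were already there.

```python
def digital_zoom_2x(row):
    """Double a row of pixels by inserting the average of each pair of
    neighbours between them (linear interpolation). No new detail is
    created: every inserted value is derived from existing values."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) / 2)  # the invented in-between pixel
    out.append(row[-1])
    return out

print(digital_zoom_2x([10, 20, 40]))  # → [10, 15.0, 20, 30.0, 40]
```

Note how the output is twice as long, yet contains exactly zero information that wasn’t in the input, which is the whole point.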

One could argue that digital zoom in the camera itself gives better results than zooming later at home with PC software, and that is probably true, but only because pictures are compressed before they are stored in the camera. If you apply digital zoom AFTER compression, some details of the original image are already lost; if you apply it BEFORE, you still won’t see anything that wasn’t already put there by good ol’ optical zoom. All in all, the truth behind superzoom is that you can’t see what’s not there.

You might have heard about superresolution techniques, but don’t be fooled: most methods rely on combining many low resolution images into a single high resolution one by extracting from each pic what the next one lacks and putting it all together, much like our eyes usually do. Although there are methods to do superresolution with a single image, at most you can get 2x-3x zoomed images with acceptable detail, and only if the original capture conditions were nice enough (see this example). In the image below you can actually improve the shape of the windows, but it’s unlikely you’ll recognize the face of someone in a window unless you use magic.
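The multi-frame principle is easy to illustrate. In the naive sketch below (Python, and very much a toy: real methods have to estimate the shifts, register the frames and deblur), a second frame captured with a half-pixel shift supplies exactly the samples the first frame missed, so interleaving the two low-resolution captures recovers twice the sampling density:

```python
def interleave(frame_a, frame_b):
    """Naive multi-frame 'superresolution': frame_b was captured with a
    half-pixel shift relative to frame_a, so interleaving the two
    low-resolution frames yields samples at twice the density. Each
    extra frame contributes detail the others lack."""
    out = []
    for a, b in zip(frame_a, frame_b):
        out.extend([a, b])
    return out

# a "scene" sampled at even positions and at odd (half-shifted) positions
scene = [3, 7, 2, 9, 5, 1, 8, 4]
even, odd = scene[0::2], scene[1::2]
print(interleave(even, odd))  # → [3, 7, 2, 9, 5, 1, 8, 4]
```

The catch, of course, is that you need several genuinely different captures of the same scene; a single frozen frame from a movie villain’s CCTV feed gives you nothing to combine.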


So is there no image processing technique that actually reveals stuff in digital images? In fact, you might be lucky with contrast enhancement techniques, which are equivalent to letting your eyes get used to darker areas so you can actually perceive shapes and such. If you capture too dark or too bright images, the scene information can still be there; it’s just that the human eye finds it hard to distinguish between a dark gray equal to 20 and one equal to 22 (if you represent illumination on a 0-100 scale, as some digital color spaces do). The computer, however, has no trouble separating a 20 from a 22, so the only thing you have to do to see things better is to tell the computer to repaint all pixels equal to 20 or less as 10 and all those equal to 21 or more as 30. Your eyes will most likely be able to distinguish 10 from 30. And, fortunately for us lousy photographers, there is a tool in almost every image processing program called “Levels” that will do the trick for us. The images below (taken from here) show the effect of this trick (called histogram stretching) on the pic on the left.
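In fact, the core of the “Levels” trick fits in a few lines of Python. Here is a toy linear histogram stretch (real tools also offer a gamma slider and per-channel controls, but this is the heart of it): pick a dark bound and a bright bound, and map that narrow window onto the full range.

```python
def stretch_levels(pixels, lo, hi, out_max=100):
    """Linear 'Levels' stretch: map values in [lo, hi] onto the full
    [0, out_max] range, clipping anything outside. A dark gray of 20
    and one of 22 end up much further apart, so the eye can tell them
    apart even though no information was added."""
    out = []
    for p in pixels:
        p = min(max(p, lo), hi)  # clip to the chosen window
        out.append(round((p - lo) * out_max / (hi - lo)))
    return out

print(stretch_levels([20, 21, 22], lo=18, hi=24))  # → [33, 50, 67]
```

Values that were 1-2 gray levels apart now differ by 17, which is exactly why the repainted image suddenly “reveals” shapes: the information was always there, only squeezed into a sliver of the range.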


The main difference between contrast enhancement and resolution enhancement is that in the first case the information is actually contained in the picture. You can’t superzoom a camera image unless it has been captured at millions and millions of pixels … and, even if you work with Chloe O’Brian at CTU, that’s not happening with your everyday traffic camera.


Ripley style!

Posted: April 19, 2013 in I robot

Have you ever met an alien queen and wanted to shout “Get away from her, you b*tch!”? Well, probably not, but if you’ve seen the movie, at some point you’ve probably wondered how an exoskeleton would feel.

In fact, exoskeletons are already a (semi)functional reality, mostly because they attracted the attention of the most powerful public investor in the world: the military. Equipping soldiers with an all-powerful exoskeleton is not a new idea: Heinlein already introduced it in 1959 in his novel Starship Troopers, and mechas became an instant classic in Japanese manga and anime. The latest example would obviously be Tony Stark’s super-cool (and flying!) Iron Man suit. These suits allowed soldiers to lift heavy weights and jump long distances, and this is basically what DARPA requested a decade ago in one of their (well funded) robotic challenges: an exoskeleton to allow lifting of up to 400 pounds (180 kg) without breaking a sweat.


Activelink exoskeleton grasper. Ring any bells? 🙂

Truth be told, the winning prototype, XOS 1 -by Sarcos- “only” lifted 90 kg and allowed the user to run at 10 mph, but it was promising enough for Sarcos to be funded and, later, purchased by Raytheon.

The XOS solved, up to a point, the main problems involved in building an exoskeleton:

-Power feeding: if your (motorless) smartphone battery goes dead after a day, how many batteries would it take to move a (multimotor) metal structure with you inside? Don’t think too big, because all those batteries add to the weight of your suit. It wouldn’t do to need a supersuit just to lift the weight of your … supersuit, right? The goal is for these suits to have juice for at least 24 hours, but thus far most of them last only a few hours.

-Human/suit interface: do you find it hard to pull off those pesky combos on your Xbox 360 controller? Well, just imagine doing that with a dozen motors and joints at a time, knowing full well that meeting the ground face first depends entirely on your driving skills. Even if you are a combo master, you are not supposed to be paying attention to a joystick in the middle of a battlefield, right? So everything should be more intuitive. Also, the lag between command and reaction should be minimized; otherwise it would feel like trying to run through water.

-Safety: the most important issue, both for the suit user and for the people around. If you want to lift 180 kg, it’s fine to be superstrong, but if you accidentally collide with a fellow nearby, you don’t want him reduced to a bloody pulp, right? Besides, if there’s a glitch in your suit’s motion control, you don’t want powerful motors ripping your arm off because the suit doesn’t know that the human elbow does not bend the other way. These problems are basically solved using force feedback (haptics), inverse kinematics (telling the robot how a human moves) and proper sensors, but everything needs to be super-tuned if we are actually wearing the thing.

The XOS had 30 actuators to control its 30 joints and used hydraulic cylinders (like our car brakes) to power them. Feedback was based on keeping a number of contact points with the joints and mimicking whatever the user does (via improved force sensors), like when we teach a kid to dance by balancing their feet on our own. A computer was in charge of translating the user’s input into motor output.



Later, Raytheon bought Sarcos and produced the XOS 2, a suit powered by an internal-combustion hydraulics engine with electrical systems, built from lighter materials and about 50% more energy efficient than the XOS 1. It has processors on every joint, and its actuators are reported to deliver about 200 kg of force per square centimetre using pressurized hydraulics.

There are two other major potential investors in this field: space and health.



NASA, for example, believes that exoskeletons might be just the thing to maintain muscle tone in space, where things don’t weigh anything and one hardly exercises at all. In this sense, their 57-pound X1 robotic exoskeleton is meant to inhibit movement in leg joints, although it can also assist it in case some angry, giant spider-bug crawls into the space station (or if we want to use it for assistance on Earth, which is more boring, but also way more likely). X1 has 10 degrees of freedom, or joints: four motorized ones at the hips and knees, and six passive ones for sidestepping, turning, and pointing and flexing the feet.

If someone fancies an Iron Man suit instead, Trek Aerospace is developing the Springtail Exoskeleton Flying Vehicle, an exoskeleton frame with a jetpack expected to fly at up to 70 miles per hour (112.6 kilometers per hour) and to hover motionless thousands of feet above the ground.

The most down-to-earth application of exoskeletons is, however, assisted motion. If your limbs or back won’t support you, these thingies could free you from the wheelchair and preserve your autonomy (at least for as long as the batteries last).

Cyberdyne -unrelated to the infamous company that created the evil Skynet- is a Japanese company that commercializes the Hybrid Assistive Limb, or HAL (also unrelated to the equally evil HAL computer in 2001: A Space Odyssey; these guys have a gift for names). HAL is a power-assisted pair of legs, and the company has also developed similar robotic arms.



Unlike the previous designs, HAL is meant to help the elderly with mobility, or to help hospital and nursing carers lift patients (nowadays you need a fairly bulky mobile minicrane to do this if the patient can’t help at all). HAL is currently operating at 150 hospitals, and suits are leased at approximately $1,950 per unit per year.

However, the most novel feature of HAL is not its purpose but its user interface: Cyberdyne claims that HAL can be operated with the brain. In fact, according to their website, they do not detect EEG signals but the bio-electrical signals that appear in the limbs when the brain decides it’s time to move them (this approach is actually closer to electromyography, but without the needles and the pain). Given a large enough number of sensors to detect the appropriate currents, each combination of readings is associated with a different motion, which is fed to the actuators (e.g. if all four sensors at hip, shoulder, elbow and wrist are firing, maybe you’re throwing a punch; if the hip sensor is off, maybe you’re simply trying to reach for something). Since it is highly unlikely that all sensors work correctly all the time, incomplete information is filled in by onboard computers, in a similar way that isolated words can be concatenated to form a meaningful sentence.
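The pattern-to-motion idea can be sketched in a few lines of Python. Fair warning: the sensor names, motions and fallback rule below are entirely made up for illustration (this is not Cyberdyne’s actual mapping), but they show the principle: each combination of firing sensors maps to a motion, and an incomplete reading falls back to the best-matching known pattern.

```python
# Hypothetical sensor-combination-to-motion table (illustrative only).
MOTIONS = {
    frozenset({"hip", "shoulder", "elbow", "wrist"}): "punch",
    frozenset({"shoulder", "elbow", "wrist"}): "reach",
    frozenset({"hip", "knee"}): "step",
}

def classify(active_sensors):
    """Map a set of firing sensors to a motion. If the reading matches
    no known pattern (e.g. a sensor dropped out), pick the pattern with
    the largest overlap and the fewest missing sensors, much like
    guessing a word from its surrounding sentence."""
    key = frozenset(active_sensors)
    if key in MOTIONS:
        return MOTIONS[key]
    best = max(MOTIONS, key=lambda p: (len(p & key), -len(p - key)))
    return MOTIONS[best]

print(classify({"hip", "shoulder", "elbow", "wrist"}))  # → punch
print(classify({"shoulder", "elbow"}))                  # → reach
```

The second call shows the fallback at work: with the wrist sensor silent, “reach” is the known pattern that explains the reading with the least missing evidence.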

Anyone seen The Prestige? For the few of you who say yes, you might recall Hugh Jackman’s meeting with Tesla in a ghostly-looking town where they planted lightbulbs in the ground and shared electricity for free. Believe it or not, totally true.

Many of you have probably watched as a kid the popular experiment where someone holds a lightbulb in their hand and steps close to a sparking Tesla coil to have it instantly light up, no wires attached. The first person to play that trick was, obviously, Nikola Tesla himself.

Tesla’s ultimate goal in this respect was the free distribution of electric energy through the air. Just imagine that your mobile phone runs dry of juice and you just need to hold it up in thin air to charge it back. Cool, right? Of course, this free-for-all way of thinking did not sit well when he moved from Croatia to the US, so we still need to plug in our equipment and pay our electricity provider accordingly. However, he was fairly successful in proving it was possible.

The key idea behind wireless energy transmission was electrical resonance, which is nicely explained here. The basics are fairly simple: if you put together an inductor and a capacitor (LC circuit), the magnetic field in the inductor generates electric currents that charge the capacitor, whereas the discharge of the capacitor produces electric currents that generate a magnetic field in the inductor and … you get the drill. This works like your conventional pendulum: in the absence of other forces, it could go on forever (in practice, though, what we get is actually an LRC circuit, where the R(esistor) stands for the unavoidable electrical losses in the circuit). Resonance happens when capacitor and inductor get along optimally, i.e. their transfer function is close to 1, meaning they give their maximum to each other. This is similar to pushing a kid on a swing: if you push at the right moment, speed and height increase significantly (and the kid will complain way less).
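For the record, this optimal give-and-take happens at one specific frequency, which for an ideal LC circuit is given by the textbook formula f0 = 1/(2π√(LC)). A quick Python check, with made-up component values (not taken from any real Tesla coil):

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an ideal LC circuit, in hertz:
    f0 = 1 / (2 * pi * sqrt(L * C)). At f0 the energy sloshes back and
    forth between the inductor's magnetic field and the capacitor's
    electric field with maximum transfer, like pushing the swing at
    exactly the right moment."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# illustrative values: a 100 microhenry coil and a 0.1 microfarad capacitor
f0 = resonant_frequency(100e-6, 0.1e-6)
print(f"{f0 / 1000:.1f} kHz")  # → 50.3 kHz
```

This is also why both halves of a Tesla coil have to be tuned to the same f0: two resonators at mismatched frequencies are like pushing the swing at random moments.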

A Tesla coil is a circuit like the one below. The left part of the circuit, including the high-voltage transformer, is simply meant to charge the high-voltage capacitor on the right until the voltage across the spark gap is enough to ionize the air -turning it into a neat conductor- and close the right-hand circuit, which charges the secondary coil and produces the expected current (resonator 1). At the same time, the torus on top of the secondary coil behaves like a capacitor with respect to the ground and hence forms resonator 2 with the coil. Now we have two resonators at the same frequency, and resonator 1 gives the required push to resonator 2 at the right time, like in the swing example, increasing its juice until breakout happens. The process is explained here in full detail, and it is obviously the basis for the Tesla guns in Warehouse 13.


Electrical resonance works pretty well for wireless power transmission, but only at close distances, because transmitting power through the air obviously results in a significant loss of energy. To get around those losses, Tesla decided to send energy through the ground instead. This was a long shot, because we all know that the ground is where currents go to die, but he figured that if he charged the ground enough (which is quite a lot), it would magically become a conductor. A conductor connected to virtually everything, too. At this point, since they wouldn’t let him test his stuff in New York City (sensibly enough, in fact), Tesla moved to Colorado Springs, where the local power company even provided juice for him at no cost. At least until he burnt down their installations, that is. He built there a Frankenstein-like lab, topped with a 180-foot metal tower, his “magnifying transmitter”. And here is where he planted light bulbs in the ground within 100 feet of the tower and lit them wirelessly.


However, Tesla wanted to go worldwide, so he decided to pump 10,000,000 volts into the Earth’s surface to see if he could make them reach the other side of the planet and come back. To prevent losses, he sent the power as a series of pulses reinforcing each other, like waves in the sea, with the bounced-back energy adding to the peaks too. All in all, the experiment resulted in an amazing 130-foot arc of lightning, thunder and all, plus the destruction of the Colorado Springs power generator.

Needless to say, that marked the end of Tesla’s experiments with wireless power transmission. With the American grid, at least. However, Powercast has recently announced a line of products to wirelessly charge low-power electronic devices (microwatts to low milliwatts). Without the arc of lightning, we expect 🙂