Archive for March, 2013

Let’s acknowledge it: we’ve all wanted an invisibility cloak at some point in our lives. And, since we can’t steal it from Harry Potter, we have to get imaginative, right?

In fact, Harry’s cloak is actually easy to build, and it is based on a long-standing filming trick called chroma key, which everyone has seen not only in the movies but also on the weather forecast every day. This technique is also used to create fantasy and scifi backgrounds that were obviously never there, like basically everything in Once Upon a Time or the whole of 300. You just need to film everything against a green background and replace anything green with whatever you want.


But back to invisibility cloaks … the trick works like this: we get a cloth (or surface) colored in something ghastly enough that you won’t find it anywhere in nature, like the usual radioactive green. Now, you get yourself a video camera and take a shot of the background, minus people, so the camera actually knows what’s behind any person walking through the field of view. After that, if someone covers up in the green cloth and walks in front of the camera, you can tell a computer to simply remove any green pixel in the image and replace it with the pixel at the same position in our background image. Voila! Anything under the cloth is removed and replaced with the background. The bad news, though, is that we are only invisible through the camera: to anyone looking at us in plain view, we would look like Green Riding Hood.
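The pixel-swapping step above can be sketched in a few lines of Python with NumPy. This is a toy sketch, not production chroma keying: the key color and the tolerance threshold are illustrative values I picked, not calibrated ones.

```python
import numpy as np

def chroma_key(frame, background, key_color=(0, 255, 0), threshold=100):
    """Replace pixels close to the key color with the stored background.

    frame, background: HxWx3 uint8 RGB arrays of the same shape.
    key_color and threshold are illustrative, uncalibrated values.
    """
    diff = frame.astype(int) - np.array(key_color)
    # Euclidean distance in RGB space between each pixel and the key color
    mask = np.sqrt((diff ** 2).sum(axis=-1)) < threshold
    out = frame.copy()
    out[mask] = background[mask]   # "cloaked" pixels show the background
    return out
```

Real systems use softer mattes and work in other color spaces to handle shadows and fringes, but the core idea is exactly this per-pixel swap.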

If what we actually want is for everyone, and not just the camera, to see our invisibility at work, we need something like the University of Tokyo invisibility cloak (or, rather, raincoat).


In this case, the previous idea is combined with Augmented Reality techniques. To achieve the desired effect, we project whatever is behind the “invisible” guy onto the front of the garment, while a camera on his back captures what he is occluding in real time. The key to a good projection is to use a retro-reflective material in our cloak, capable of bouncing light rays back in exactly the same direction they came from, so that the projection is bright enough to be seen outdoors in daylight (think of the difference between projecting something on your wall and on a cinema screen with the lights on). In the Tokyo prototype, this is achieved via a beaded surface.

Image taken from How Stuff Works

Needless to say, the whole setup for a camouflage effect that only works from a fixed direction is quite bulky. Nice enough for something like the Avengers helicarrier, though 🙂


Actually, the closest things we have to invisibility gadgets are metamaterials, because they can actually bend electromagnetic radiation. Given that we see an object because light reflected by its surface reaches our eyes, it’s easy to see that if we place a second object in the path of that reflected light, some beams are blocked and replaced with beams reflected by the second object instead. Thus, the image we perceive on the other side is a composition of the original object and the second one. If the second object is reflective, light bounces off and we get nice effects like nifty rainbows on the wall and such, but the fact remains that the blocked light never reaches us, so we can’t perceive the original image as it was.


Metamaterials can actually do the trick for us. We can imagine that they divert the light, like a mirror does, but they also have the ability to bring it back to its original path once the second object is left behind. Thus, when it reaches our eye, the original picture looks as it should: as if nothing had been in the way. The object behind our metamaterial has been rendered invisible, to all effects.
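In optics terms, the "skill" that makes this possible shows up in Snell's law of refraction. A quick sketch in standard notation (my own summary, not from the original post):

```latex
% Snell's law relates the angles of incidence and refraction
% through the refractive indices n_1, n_2 of the two media:
\[
  n_1 \sin\theta_1 = n_2 \sin\theta_2
\]
% Ordinary materials have n > 0, so the refracted ray stays on the
% opposite side of the surface normal. A metamaterial can exhibit an
% effective n < 0, bending light "the wrong way" -- the property that
% lets a suitably graded cloak steer rays around an object and back
% onto their original course.
```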

Unfortunately, metamaterials for invisibility cloaks need to be built as a lattice whose element spacing is smaller than the wavelength of the light we wish to bend. Since we can’t physically make the lattice as small as we would like, so far this has only worked for long-wavelength radiation such as microwaves. With a cape like this, we would be invisible to some wave detectors (think of good old stealth planes and radar) but not to visual inspection.

In 2007, the University of Maryland’s Igor Smolyaninov managed to produce a metamaterial capable of bending visible light around an object, but it was quite small and heavy, and the effect was limited to two dimensions. Some universities have also reported new structures that could work in the visible light spectrum and even in 3D (i.e. the cloaked object would be invisible from any direction) but, given the current limitations in materials, it’s not likely we’ll be buying an invisibility coat at Zara anytime soon.

One of the things I remember most from watching Jurassic Park is that the explanation of how they brought the dinosaurs back actually sounded plausible (which is, in my opinion, the difference between good and bad scifi). Indeed, it was so plausible that apparently someone took it seriously enough to try.

There’s this “Lazarus Project” at the Australian University of New South Wales, where scientists have reported bringing back from extinction a weird gastric-brooding frog that had been a goner since 1983:


Apparently, Adelaide frog researcher Mike Tyler froze some specimens before they vanished, and these have now been used to bring one back using the same cloning technology that science applies to still-living animals. Basically, they took eggs from a distantly related living frog (the great barred frog), deactivated the original DNA with UV light, and inserted the extinct frog’s DNA into the eggs. Don’t get your hopes too high! The eggs became embryos, but these died shortly after, though not before it was confirmed that they were actually gastric-brooding frog embryos and not great barred frog ones. The scientists claim they expect to bring the frog back soon.

It should be noted that the cloning of extinct animals started with the Pyrenean ibex, but in that case the DNA was extracted from a living specimen before the species went extinct, rather than from frozen samples. Too bad the cloned ibex died shortly after its birth. If there are still valid samples, maybe they can have another try with these too!


The next big question would be: how many sources of valid DNA can we find out there?

Source: io9

Everybody knows about Virtual Reality (VR) nowadays, since all of us have played one videogame or another. Compared to traditional representation techniques, Virtual Reality allows mapping the geometry of a 3D body into a non-existent 3D space, meaning that we can capture –in this case, render– any view from any point of view at will, whereas in traditional 2D games we were bound to watch the game as the designer had originally planned. For a better example: when we watch TV we are forced to see the film from the director’s (camera’s) perspective. VR-based TV would allow us to change the point of view at will, not because there are more cameras, but because we know the 3D shape of all objects in the field of view and, hence, we can project them onto the screen plane by simple geometric transformations. Like, for example, moving the camera around the main character to see if the monster is already lurking, instead of waiting for the traditional scare. Indeed, Johnny Chung Lee did something like this using a Wiimote to track your position with respect to the TV and change the projected image accordingly:
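Those “simple geometric transformations” boil down to a perspective projection. Here is a minimal sketch in Python, assuming an unrotated camera looking down the z axis (the 800-pixel focal length is an arbitrary choice of mine):

```python
import numpy as np

def project_points(points, viewer, focal=800.0):
    """Pinhole-style projection of 3D points onto a 2D screen plane.

    points: (N, 3) world coordinates; viewer: (3,) camera position.
    Assumes the camera looks straight down the +z axis (no rotation),
    which keeps the sketch short; a real renderer would also apply a
    view rotation matrix.
    """
    rel = np.asarray(points, float) - np.asarray(viewer, float)
    z = rel[:, 2]
    # Perspective divide: points twice as far away project half as large
    return focal * rel[:, :2] / z[:, None]
```

Move `viewer` around (as the Wiimote head-tracking demo does) and the same 3D points land on different screen positions, which is exactly the change-your-point-of-view effect described above.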

Cool as it sounds, VR promised more than it delivered, and ended up limited to a few applications, mostly progressively more visually complex videogames, like the ones on the Xbox or PS3 today. Maybe because they are so close to the general public, VR applications are not fascinating anymore. However, Augmented Reality might do the trick.


Unlike Virtual Reality, people in Augmented Reality environments are not in different, faraway places, but exactly where they are. Sounds boring, huh? The key to AR is that some of the things you see, in fact, aren’t there: they are computer-generated and superimposed on your vision. For example, the T800’s view in Terminator displayed information about whoever the robot was watching, the better to shoot them in the face. Similarly, the Predators also had their own goggles. And the screen where Tom Cruise messed with present and future in Minority Report was not actually there, floating in the air. I’d also say that Gollum was not really there with Frodo and Sam, but then, who knows :P.


AR just requires a camera, a PC or some other processing device, and some sort of viewing device: a screen, goggles, a mobile phone or whatever. The camera sees objects in the real world, the PC calculates their positions and generates extra information, and then the viewing device presents the real view with just that little extra. Probably the best-known AR application, though, is the weather forecast, where the presenter waves at nothing while the computer superimposes a map with suns and clouds in thin air, in real time. The trick here is that the person is actually standing in front of a flat surface colored in something you would not be caught dead wearing, like radioactive green. The camera knows that color well, so whenever it finds it in a pixel, it replaces it with the equivalent pixel from a computer-generated image. In the end, only the computer-generated image and the non-green pixels (i.e. the person) remain. This process is known as (static) background subtraction. Indeed, the PlayStation EyeToy works in a similar way, only instead of assuming the background is homogeneously colored, it assumes the only thing moving in the field of view is the player (motion-based background subtraction).
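The EyeToy-style, motion-based variant can be sketched by thresholding the difference between consecutive frames (grayscale for brevity; the threshold of 30 is an illustrative value, and real trackers add blur and morphological cleanup to suppress noise):

```python
import numpy as np

def motion_mask(frame, prev_frame, threshold=30):
    """Motion-based background subtraction, EyeToy style.

    frame, prev_frame: HxW grayscale uint8 arrays of consecutive frames.
    Pixels whose intensity changed by more than `threshold` are flagged
    as the moving foreground (the player); the static background cancels
    out in the difference.
    """
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return diff > threshold
```

Note the contrast with the chroma-key case: here nothing is assumed about the background’s color, only that it stays still.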

AR techniques are based on knowing where things are. The key idea is to align the virtual and real worlds, so that virtual information stays tied to real objects. In VR applications, if we move ahead, the whole world moves with us. In AR, things tend to remain where they are: if there is a key on top of our dining room table, a virtual label may be overlaid on it, specifying that it is our storeroom key. However, if we step ahead and leave the key behind, the label stays on top of it.


If we want to include more complex 3D objects, like a Gollum to guide us somewhere, the alignment problem is a bit harder, since the object has to seem to be within the real world. Basically, we take some object whose size and position we know and use it as an anchor. For example, a black square might do the trick, although it is possible to use almost anything in the room if our computer is smart enough and we have enough computation power. Distortion due to perspective (small means far, large means close, and so on) allows us to position the person’s point of view (POV) in an empty virtual VR world. We can model our object there with typical VR techniques and then render it from the calculated POV. The resulting view is combined with the real one to create an augmented frame. If the process is fast enough, the virtual object changes shape according to our position in the real world.
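“Small means far, large means close” can be made quantitative with the pinhole camera model: the marker’s apparent size in pixels is proportional to its real size divided by its distance. A toy sketch (the focal length and sizes below are hypothetical calibration values):

```python
def marker_distance(real_size_m, apparent_size_px, focal_px):
    """Estimate how far a marker of known size is from the camera.

    Pinhole model: apparent_size_px = focal_px * real_size_m / distance,
    so we simply invert that relation. A full AR system would estimate
    the complete 6-DoF pose from the marker's four corners, not just
    its distance.
    """
    return focal_px * real_size_m / apparent_size_px


# A 10 cm marker that appears 100 px wide, seen through a lens with a
# 1000 px focal length, sits one meter away.
print(marker_distance(0.1, 100, 1000))  # -> 1.0
```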


What do we want AR for, besides FX and computer gaming? If you have not yet seen Iron Man (the first one), head to the viewing facility closest to your convenience and take a peek 🙂


Want to know more … ?