Mapping 101 for explorers

Posted: February 20, 2013 in This is how it works

One of my (few) favorite moments in Ridley Scott’s Prometheus is the mapping of the ship using fancy flying spheres. Such devices are actually feasible today (and it would be super-cool if they were flying spheres equipped with light beams, too). Drones (small autonomous flying machines) have been in use since the Second World War and, at this point, one can acquire a Parrot AR.Drone for a reasonable price. Sensors could be a problem at some point, but Kinect-like devices are actually close to what the movie shows.

Let’s start slow: the process of mapping an unknown environment as they do in Prometheus is called SLAM (Simultaneous Localization and Mapping) and, depending on the available sensors, is largely a solved problem (http://openslam.org/). If a person had to map a corridor environment, they would probably start by counting long steps (say, approximately one meter each) to measure each corridor, and then try to recall any bifurcations in it. If we know a bit about mazes, we would probably use the right-hand rule, meaning that we would turn into every new corridor on the right and, eventually, we would be out of the place with a (partial) map of the maze under one arm, too! The main problem with this approach is, obviously, that our perception of distance is only approximate, so we accumulate errors. In a straight corridor this is not so important, unless we want to match maps acquired by different people, but once turning is involved, errors grow faster and render the map useless after a short while. The figure on the left shows how a map looks to the casual observer when the data-gathering platform does not correct localization errors, whereas the one on the right shows what happens when these errors are corrected.

[Figure: the same environment mapped without SLAM (left) and with SLAM (right)]
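To get a feeling for how fast dead reckoning degrades, here is a minimal Python sketch (all the noise figures are made up for illustration) of a walker who tries to go straight by counting one-meter strides:

    import math
    import random

    random.seed(42)

    def dead_reckon(steps, heading_noise_deg=2.0, step_noise_m=0.05):
        """Integrate noisy stride/heading estimates; without correction, errors accumulate."""
        x = y = theta = 0.0
        for _ in range(steps):
            theta += math.radians(random.gauss(0.0, heading_noise_deg))  # heading drift per stride
            stride = 1.0 + random.gauss(0.0, step_noise_m)               # roughly 1 m strides
            x += stride * math.cos(theta)
            y += stride * math.sin(theta)
        return x, y

    # walking "straight" for 50 strides should end at (50, 0), but:
    print(dead_reckon(50))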

If we could assume that all corridors are perfectly orthogonal, this would not happen, because we would be certain that every new corridor starts exactly 90 degrees from the previous one. However, in an unknown environment no such assumptions can be made.

In order to solve this problem, one can try to match what we have perceived thus far with what we are perceiving at the moment. If we keep correlating what we perceive with what we have stored in our memory, we are actually performing SLAM. In most cases, methods follow a statistical approach more or less derived from classic Bayes theory: the probability of being at a given location, given what we observe after moving, is proportional to the probability of making that observation from that location, times the prior probability of being there. To take the simplest possible example, in a corridor where doors are spaced exactly one meter apart, if we had just observed a door and we observe another one after moving ahead, we have most likely moved ahead exactly one meter. This process is explained in the figure:

[Figure: tracking features (doors) between consecutive observations to estimate motion]
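For the curious, the door example can be written as a tiny discrete Bayes filter. The corridor layout, sensor model and motion model below are all invented for illustration:

    doors = {2, 3, 7}                 # invented door layout in a 10-cell corridor
    belief = [1.0 / 10] * 10          # start fully uncertain about our cell

    def sense(belief, saw_door, p_hit=0.8, p_miss=0.2):
        """Bayes update: re-weight each cell by how well it explains the observation."""
        posterior = [b * (p_hit if ((i in doors) == saw_door) else p_miss)
                     for i, b in enumerate(belief)]
        total = sum(posterior)
        return [p / total for p in posterior]

    def move(belief, p_exact=0.9):
        """Motion update: shift belief one cell, allowing slight under/overshoot."""
        n = len(belief)
        shifted = [0.0] * n
        for i, b in enumerate(belief):
            shifted[(i + 1) % n] += p_exact * b
            shifted[i] += (1 - p_exact) / 2 * b            # undershoot: stayed put
            shifted[(i + 2) % n] += (1 - p_exact) / 2 * b  # overshoot: two cells
        return shifted

    belief = sense(belief, saw_door=True)   # we see a door...
    belief = move(belief)                   # ...walk roughly one meter...
    belief = sense(belief, saw_door=True)   # ...and see another door
    print(max(range(10), key=lambda i: belief[i]))  # most likely cell: 3

After the second door sighting, the only cell that explains both observations and the motion in between is cell 3, so the belief collapses around it.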

Of course, things are not that easy if we do not know about the evenly spaced doors a priori, as is usually the case, but we can still manage. For example, after a 180-degree turn, a corridor segment that was straight behind us cannot suddenly form a 40-degree angle with our heading if there are no corners; the most feasible explanation is that we have turned too much or too little. The most usual approach is to extract features of the environment from different locations and compare where they lie with respect to us at each location: e.g. if we see a window by our side in one location and it is just 2 meters behind us in another, we have moved exactly 2 meters … if we can take that “exactly” for granted, that is.
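As a toy sketch of that feature-comparison idea (the feature names and readings below are, of course, made up): each feature matched between two poses votes for a displacement, and averaging the votes damps individual measurement noise:

    def estimate_displacement(before, after):
        """
        before/after map a feature id to its signed forward distance (meters)
        from the robot: positive means ahead of us, negative means behind.
        Every matched feature votes for a displacement; the votes are averaged.
        """
        votes = [before[f] - after[f] for f in before if f in after]
        if not votes:
            raise ValueError("no matched features between the two observations")
        return sum(votes) / len(votes)

    # a window seen 3.0 m ahead is later 2.1 m behind us (made-up readings)
    before = {"window": 3.0, "doorframe": 5.2}
    after = {"window": -2.1, "doorframe": 0.2}
    print(estimate_displacement(before, after))  # ~5.05 m, not "exactly" 5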

There are many well-known techniques that solve SLAM in robotics, mainly Monte Carlo-based solutions, and a few of them are available for open frameworks like ROS. However, all of them are more or less complex mathematical variations of the same basic idea above.
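A Monte Carlo solution keeps a cloud of “particles”, each one a guess of where we are, and repeatedly moves, weights and resamples them. A minimal one-dimensional sketch, with invented door positions and noise figures, could look like this:

    import math
    import random

    random.seed(0)
    doors = [2.0, 5.0, 9.0]  # door positions along a 10 m corridor (invented)

    def dist_to_nearest_door(x):
        return min(abs(x - d) for d in doors)

    def mcl_step(particles, moved_m, sensed_door_dist,
                 motion_noise=0.1, sensor_noise=0.3):
        """One Monte Carlo localization step: move, weight, resample."""
        # 1. motion update: shift every particle by the odometry estimate, plus noise
        particles = [(p + moved_m + random.gauss(0.0, motion_noise)) % 10.0
                     for p in particles]
        # 2. weight each particle by how well it explains the sensed door distance
        weights = [math.exp(-(dist_to_nearest_door(p) - sensed_door_dist) ** 2
                            / (2 * sensor_noise ** 2))
                   for p in particles]
        # 3. resample: likely particles multiply, unlikely ones die out
        return random.choices(particles, weights=weights, k=len(particles))

    particles = [random.uniform(0.0, 10.0) for _ in range(500)]   # fully uncertain
    particles = mcl_step(particles, moved_m=1.0, sensed_door_dist=0.0)
    # after one step the belief is still multimodal (one cluster per door)
    print(sorted(particles)[::100])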

Now, the key is to have the sensors necessary to acquire a 3D model of the environment on the fly. In the movie, the spheres apparently use laser beams, which are perfect for estimating the distance from the emitter to a point, but to achieve this we would need a delicate motor system to sweep those lasers around, since each laser reading only tells us how far the projection of that laser on the closest surface is from the camera that captures its light. Obviously, we would need a laser capable of rotating around the flying sphere at full speed again and again and again, like a PET scanner. Corridors would be sliced into planes around the sphere, and the surface of those corridors would be rebuilt from the laser readings in the 360 degrees around it at each plane. Such a system could be built, but it would be expensive and heavy, so a small sphere is not likely to carry one any time soon.
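Turning such a rotating laser’s readings into geometry is the easy part: each sweep is a set of (angle, distance) pairs in one slicing plane, converted to points with basic trigonometry. A sketch with synthetic readings for a square room:

    import math

    def sweep_to_points(ranges_m, x0, y0, z0):
        """
        Turn one 360-degree horizontal laser sweep into 3D points:
        ranges_m[i] is the distance measured at angle i degrees, and every
        hit from the sweep shares the sphere's current height z0.
        """
        points = []
        for deg, r in enumerate(ranges_m):
            a = math.radians(deg)
            points.append((x0 + r * math.cos(a), y0 + r * math.sin(a), z0))
        return points

    # synthetic readings for a square 4 x 4 m room centered on the sensor
    ranges_m = [2.0 / max(abs(math.cos(math.radians(d))),
                          abs(math.sin(math.radians(d))))
                for d in range(360)]
    slice_points = sweep_to_points(ranges_m, 0.0, 0.0, 1.5)
    print(slice_points[0], slice_points[45])  # a wall hit and a corner hit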

Nevertheless, once we know exactly where we are, it is not that hard to actually build a 3D model of the environment, as long as we can just detect the distance to nearby obstacles and map (captured) textures on top of planes. Not as fast as in Prometheus, though … yet.
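For instance, once poses can be trusted, simply quantizing laser hits into a sparse voxel grid already gives a crude 3D model (a minimal sketch; the resolution and wall data are invented):

    def mark_occupied(points, resolution_m=0.25):
        """Quantize 3D hit points into voxel indices; the set is our sparse model."""
        return {(int(x // resolution_m), int(y // resolution_m), int(z // resolution_m))
                for (x, y, z) in points}

    # laser hits along a wall at y = 2 m, observed from a perfectly localized sphere
    wall_hits = [(x / 10.0, 2.0, 1.5) for x in range(40)]
    print(len(mark_occupied(wall_hits)), "occupied voxels")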
