Kat Allen

Where Am I and Who Hid The Map?

SLAM, Sensor Fusion, Big Data and Lost Robots


"Everyone" knows that, if you want to get around, GPS(1) is how you know where you are, where you are going, and how to get there. It's actually much more complicated, since GNSS systems like GPS *only* tells you where you are, right now, in relation to a set of orbiting satellites with very well-characterized orbits. As long as you can see enough of those satellites, and have the orbital data, you can know where you are ... more or less. (GPS data is typically accurate to within 2-4 meters under open sky)


But what if you are *not* under open sky, like when you're driving in a city? Or you're a robot inside a building? What if you need accuracy better than 4 meters? (4 meters can be a pretty big error - that's wider than a pedestrian bridge, wider than one lane of a road, etc.)



Robot on a street, looking at a map, with signals coming in from candy-shaped satellites in the sky.  The road the robot is on leads to a broken bridge with a "bridge out" sign.  The correct bridge is behind the robot, on a parallel path a few meters away.
The GPS says we're on the road to the bridge...


This has come up a lot recently, both at the large scale (a fascinating recent talk by Zak Kassas of the CARMEN center about sensor fusion techniques for autonomous vehicles when GPS signals are lost) and at the small scale (getting an EV3-based robot to navigate around my kitchen). Sometimes, GNSS signals are not enough, or not available, and we need more data.


Sensor Fusion

Sensor Fusion is the answer, and is exactly what it sounds like: taking multiple sensor readings and combining them to correct for drift and error rates.(2)


All the plusses mean Fusion must be good!
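Here's the core idea in its most minimal form - a hedged little Python sketch, not anything from a real system: two noisy readings of the same quantity, weighted by how much we trust each one (inverse-variance weighting). Real pipelines use proper filters (Kalman and friends), but the instinct is the same.

```python
# Hedged, minimal illustration of sensor fusion: two noisy readings of the
# same quantity, combined with weights based on how much we trust each one.
# The example numbers below are made up.

def fuse_two(reading_a, var_a, reading_b, var_b):
    """Weight each reading by the inverse of its variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * reading_a + w_b * reading_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# e.g. a noisy GPS fix vs. a tighter short-term odometry estimate:
print(fuse_two(10.2, 4.0, 9.8, 0.25))   # result lands much closer to 9.8
```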

Sounds great, right? Just squish all the sensors together in a giant sensor group-hug, and we're good to go.


Scene from Disney's Aladdin, with all the protagonist characters in a group hug initiated by the Genie
Is this sensor fusion?

Except that the "sensor group hug" is a little harder than that. Each sensor has its own data rate, is mounted in a different place on the robot/vehicle/whatever, and delivers its data in a different format.
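Even just lining up the timestamps is work. Here's a hedged sketch of that one small piece, with made-up 100 Hz and 10 Hz streams: resample the slow sensor onto the fast sensor's clock before trying to fuse anything.

```python
import numpy as np

# Hedged sketch of one unglamorous part of the "group hug": sensors report
# at different rates, so before fusing we resample one stream onto the
# other's timestamps. The rates and signals here are placeholders.

imu_t = np.arange(0.0, 1.0, 0.01)          # IMU at 100 Hz
imu_angle = np.sin(imu_t)                   # placeholder IMU-derived angle

lidar_t = np.arange(0.0, 1.0, 0.1)          # LIDAR at 10 Hz
lidar_angle = np.sin(lidar_t) + 0.01        # placeholder LIDAR-derived angle

# Interpolate the slow LIDAR stream onto the fast IMU timestamps so every
# IMU sample has a matching LIDAR value to fuse against.
lidar_on_imu_t = np.interp(imu_t, lidar_t, lidar_angle)
print(lidar_on_imu_t.shape)   # (100,), same length as the IMU stream
```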

As an example, let's take just the bonus content from a recent robotics project of mine: a comparison of the Inertial Measurement Unit (IMU) and a LIDAR chip to measure the angle of the robot with respect to our desired (vertical) angular position:


Even in a *very* simple system, with a very simple measurement goal, being moved by a relatively steady human hand (so the best control system we have!), the complications add up quickly - see the sketch after this list for one way to reconcile them:

  • The LIDAR is mounted slightly angled downwards from horizontal

  • The IMU drifts over time (even in a relatively short test - this data was from about 30 seconds of motion)

  • The IMU has errors when higher-order derivatives of position (acceleration, jerk, etc) are present. (This would be especially problematic in the system as it operated, since the motor-driven control system was *not* very smooth in operation!)
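Here's that promised sketch. One common way to handle the drift and the mounting offset is a complementary filter: trust the gyro for fast changes, and let the slower but drift-free LIDAR angle pull the estimate back over time. This is a hedged toy in Python; the offset, rates, and data streams are placeholders, not my project's actual numbers.

```python
import numpy as np

# Hedged sketch of a complementary filter fusing a drifting gyro with a
# slower, drift-free LIDAR-derived angle. All constants and data streams
# below are made-up placeholders.

LIDAR_MOUNT_OFFSET = np.deg2rad(2.0)   # sensor tilted slightly below horizontal
ALPHA = 0.98                            # how much we trust the gyro short-term
DT = 0.01                               # 100 Hz loop

def fuse(gyro_rates, lidar_angles):
    """Blend integrated gyro rate with the LIDAR angle, sample by sample."""
    angle = lidar_angles[0] - LIDAR_MOUNT_OFFSET   # initialize from LIDAR
    fused = []
    for rate, lidar in zip(gyro_rates, lidar_angles):
        gyro_angle = angle + rate * DT             # integrate (drifts over time)
        lidar_angle = lidar - LIDAR_MOUNT_OFFSET   # correct for the mounting tilt
        angle = ALPHA * gyro_angle + (1 - ALPHA) * lidar_angle
        fused.append(angle)
    return np.array(fused)

# Fake 30 s of data: a constant gyro bias makes pure integration drift away,
# while the noisy LIDAR angle stays centered on the true (vertical) position.
t = np.arange(0, 30, DT)
gyro = np.full_like(t, 0.01)                       # rad/s of pure bias
lidar = np.random.default_rng(1).normal(LIDAR_MOUNT_OFFSET, 0.02, t.shape)
print("final fused angle (rad):", fuse(gyro, lidar)[-1])
```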


Ok, so sensor fusion is going to take some work. But it's very achievable work: we can sort out the limitations of each system, and clever engineers can combine their strengths. Cool! Now we know where we are!


... but where is everything else? We need a map.


The Map

If we want to get somewhere, we need a map. Maybe we are very lucky and someone has made one for us! Big Data FTW: companies like Ecopia AI are using satellite, aircraft, and streetview data to make high-resolution maps with lots of things identified automatically:


A screenshot from the Ecopia demo, which I think is Barcelona

This is great: I can use all this labeled data to make maps! (Computer science is really, really good at finding routes between places along paths with known costs.)
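(By "really, really good" I mean: once the places and path costs are known, routing is classic graph search. Here's a hedged sketch of Dijkstra's algorithm over a made-up little street graph - not Ecopia's data or API, just the textbook idea.)

```python
import heapq

# Hedged sketch: shortest-path search over a tiny, made-up graph of places
# with known travel costs (Dijkstra's algorithm).

def dijkstra(graph, start, goal):
    """Return the cheapest cost and path from start to goal."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step_cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + step_cost, neighbor, path + [neighbor]))
    return float("inf"), []

streets = {
    "home":   [("corner", 1.0), ("bridge", 4.0)],
    "corner": [("bridge", 1.0), ("cafe", 5.0)],
    "bridge": [("cafe", 1.0)],
}
print(dijkstra(streets, "home", "cafe"))   # (3.0, ['home', 'corner', 'bridge', 'cafe'])
```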


But what if nobody has made a map with the locations I need to know about? What if I am working at human-scale, rather than civilization scale, in a dynamic environment like a search-and-rescue operation?


Now I need SLAM! Wait, what is SLAM?


Is it a really big door? Nope, it's an acronym...


Simultaneous Localization and Mapping


SLAM is a method for both creating a map and finding your position on that map at the same time. Unsurprisingly, SLAM also relies on sensor fusion, combining data from positional sensors like LIDAR and IMUs, visual data from cameras, and mechanical sensors like motor rotation counters to make a map at a scale that makes sense to the vehicle/robot. The people at MATLAB have a great article on the basics of SLAM and how it works with LIDAR point clouds or visual camera data. There are some challenges (it's important to be able to tell when you've returned to the same place while you're building the map, and that's surprisingly difficult!), but it is a powerful tool for navigation.
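To make the "simultaneous" part concrete, here's a hedged toy in Python: a one-dimensional world where we jointly estimate the robot's position *and* a single landmark's position from noisy odometry and noisy range measurements, using a plain Kalman filter. Real SLAM systems juggle thousands of landmarks, nonlinear motion, and loop closures; this is just the smallest version of the idea, with made-up numbers.

```python
import numpy as np

# Hedged toy "SLAM" in 1-D: the state holds both the robot's position and a
# landmark's position, and both are refined from noisy odometry plus noisy
# range measurements. All numbers are made up for illustration.

x = np.array([0.0, 0.0])            # initial guess: [robot, landmark]
P = np.diag([0.1, 100.0])           # landmark position starts very uncertain
Q = np.diag([0.05, 0.0])            # motion noise affects only the robot
R = 0.1                              # range-measurement noise variance

F = np.eye(2)                        # state only changes via the control input
H = np.array([[-1.0, 1.0]])          # measurement z = landmark - robot

def predict(x, P, u):
    """Apply odometry u (distance moved) and inflate uncertainty."""
    x = x + np.array([u, 0.0])
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Fuse a range measurement to the landmark."""
    y = z - (H @ x)                  # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate: true landmark at 5.0, robot steps 1.0 per tick with noisy sensors
rng = np.random.default_rng(0)
true_robot, true_landmark = 0.0, 5.0
for _ in range(5):
    u = 1.0
    true_robot += u
    x, P = predict(x, P, u + rng.normal(0, 0.05))          # noisy odometry
    z = (true_landmark - true_robot) + rng.normal(0, 0.1)  # noisy range
    x, P = update(x, P, z)

print("estimated robot, landmark:", x)
```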


So, there we go: with lots of sensors, carefully integrated into a complex picture of our surroundings, we *might* be able to get our robot down the block for a cup of coffee. (But hopefully you built a cupholder into your robot, since robots are also bad at holding things -- perhaps the challenge of gripping things will be a future blog post!)


Footnotes

(1) Or, more generally, GNSS - the Global Navigation Satellite Systems, which include GPS and several other systems -- you can read a great article about the various GNSS systems and their limitations and capabilities here


(2) Thank you to Aptiv for the excellent graphic!

