Building a real-time 2D laser range finder

[Images: the finished range finder and a diagram of its operation]

Here’s an older project that I found pretty inspiring. Way back in 2001, Kenneth Maxon of the Seattle Robotics Society built his own Real-time Laser Range Finding Vision System using naught but a laser pointer, an NTSC video camera, a bit of custom electronics, and a Complex Programmable Logic Device (CPLD). Vision systems usually seem really complicated, but the crux of this one is relatively simple.

First, a laser line, parallel to the ground, is projected onto the objects to be detected. The camera is mounted above the laser, pointed down at the objects at an angle. Next, the logic device continuously scans each line of video data, recording the point in the line where the highest intensity of red was detected. This point corresponds to the distance to the nearest object in that direction, but needs to be corrected using a look-up table. Once every row of data has been scanned, the vision system has a list of distances to the nearest object across the camera’s entire field of view. This data can then be used to determine which direction to move a robot, or to map out the current environment. Cool!
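To make the scan-and-look-up step concrete, here is a rough C++ sketch of the per-scan-line logic (the buffer sizes, threshold, and calibration function are placeholders for illustration, not Maxon’s actual CPLD design):

    #include <cstdint>
    #include <cstddef>

    const size_t LINES = 240;            // visible NTSC scan lines (roughly)
    const size_t PIXELS_PER_LINE = 320;  // horizontal samples per line (arbitrary)
    const uint8_t MIN_RED = 200;         // laser should outshine ambient light

    // Stand-in for the calibration look-up table: maps the column where the
    // laser spot landed to a distance. A real table would be built by
    // measuring known ranges; this linear model is purely illustrative.
    uint16_t pixel_to_mm(size_t px) {
        return static_cast<uint16_t>(300 + 10 * px);
    }

    // For each scan line, find the column with the brightest red (the laser
    // spot) and convert it to the range of the nearest object that way.
    void scan_frame(const uint8_t red[LINES][PIXELS_PER_LINE],
                    uint16_t range_mm[LINES]) {
        for (size_t line = 0; line < LINES; ++line) {
            uint8_t best = 0;
            size_t best_px = 0;
            for (size_t px = 0; px < PIXELS_PER_LINE; ++px) {
                if (red[line][px] > best) {
                    best = red[line][px];
                    best_px = px;
                }
            }
            // Only trust hits bright enough to plausibly be the laser.
            range_mm[line] = (best >= MIN_RED) ? pixel_to_mm(best_px) : 0;
        }
    }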

Think it could be done on an 8-bit microcontroller? [thanks, Marty!]

7 thoughts on “Building a real-time 2D laser range finder”

  1. Carnes says:

    Sure, though it might be slow, depending on how much image data there is to sort through. I’ll try a demo after work. You could just set up an array like [400][300][3]=X,Y,RGB, fill it in with normalized random data, and then add a star pattern of high red with low G&B. Run through the image looking for the center of the star pattern. The higher the red star is in the image, the farther the object.

    If it works, it’s not much further to feed it actual images. But I think that will be the real bottleneck: there isn’t enough onboard RAM to even hold the above array; you’d need 360 kB of space. The largest image you can hold in RAM is about 32×20 RGB pixels. But luckily we only need a sliver of the whole image, the middle section from top to bottom. A 5×128 resolution would work just fine, as long as it can still pick out the section that is most red.

    I guess you could also read into a small buffer, discarding all but 5 lines, and do the searching as the data arrives. But the camera device would have to be tolerant of this.
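    Roughly, the sliver search might go like this (the sizes are from above; everything else is invented):

      #include <cstdint>
      #include <cstddef>
      #include <climits>

      const size_t COLS = 5;    // a vertical sliver, 5 columns wide
      const size_t ROWS = 128;  // 128 rows tall: ~1.9 kB as RGB, just fits in 2 kB

      struct Rgb { uint8_t r, g, b; };

      // Find the row of the sliver that is "most red" (red well above the
      // other channels), which should be where the laser lands.
      size_t most_red_row(const Rgb sliver[ROWS][COLS]) {
          size_t best_row = 0;
          int best_score = INT_MIN;
          for (size_t row = 0; row < ROWS; ++row) {
              int score = 0;
              for (size_t col = 0; col < COLS; ++col) {
                  const Rgb &p = sliver[row][col];
                  score += p.r - (p.g + p.b) / 2;  // redness of this pixel
              }
              if (score > best_score) {
                  best_score = score;
                  best_row = row;
              }
          }
          return best_row;
      }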

    Please, reality check me.

    1. Matt Mets says:

      Oh! I think I was totally misleading in my description of the project. The clever part about his algorithm is that he used threshold detection to mark the first spot where the camera saw a high enough value of red, and just stored that position straight away. Using an external IC to separate out the H and V sync signals, I think it’s just a matter of counting how long the delay is between the vertical sync and the first incidence of the threshold detector. That way the IC doesn’t need to do any /real/ image processing, it just needs to collect the information from the analog detector. If you assume that the laser has a much higher intensity than any of the ambient light, you should be able to do it with just a 1d array of distances.
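      A skeleton of that in Arduino-flavored C++ might look like the following (pin numbers, line count, and the use of micros() are all my assumptions; an NTSC line is only ~63 µs, so a real version would want a hardware timer capture rather than micros()):

        #include <Arduino.h>

        const uint8_t VSYNC_PIN = 2;      // LM1881 vertical sync output
        const uint8_t HSYNC_PIN = 3;      // composite sync: one pulse per line
        const uint8_t THRESH_PIN = 4;     // comparator: red above threshold
        const uint16_t NUM_LINES = 240;

        volatile uint16_t line_num = 0;
        volatile uint32_t line_start_us = 0;
        volatile uint16_t delay_us[NUM_LINES];  // time-to-laser, per line

        void onVsync() { line_num = 0; }  // new frame: restart line counter

        void onHsync() {
            line_start_us = micros();
            if (line_num < NUM_LINES - 1) line_num++;
            delay_us[line_num] = 0;  // no laser seen on this line yet
        }

        void setup() {
            pinMode(THRESH_PIN, INPUT);
            attachInterrupt(digitalPinToInterrupt(VSYNC_PIN), onVsync, FALLING);
            attachInterrupt(digitalPinToInterrupt(HSYNC_PIN), onHsync, FALLING);
        }

        void loop() {
            // First comparator trigger on each line marks the laser spot.
            if (digitalRead(THRESH_PIN) == HIGH && delay_us[line_num] == 0) {
                delay_us[line_num] = (uint16_t)(micros() - line_start_us);
            }
        }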

      1. Chris Kern says:

        I actually built exactly what you’re describing a few years ago. It consisted of an LM1881 sync separator and an atmega168. I recall that it worked really well on contrived targets (e.g. a cardboard box in flat light) but tended to get confused by real-world situations with weird lighting and shiny targets. It’s definitely a workable idea, though, and there are a lot of improvements that can be made to the general concept — for example flipping the laser on and off based on the television field bit to adaptively set the threshold.
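        The field-bit trick could be as simple as something like this (an Arduino-flavored sketch; the pins, the analog red tap, and the decay scheme are all assumptions):

          #include <Arduino.h>

          const uint8_t LASER_PIN = 5;
          const uint8_t FIELD_PIN = 6;   // LM1881 odd/even field output
          const uint8_t RED_PIN   = A0;  // analog red level (hypothetical tap)
          const int MARGIN        = 30;  // laser must beat ambient by this much

          uint8_t ambient_peak = 0;      // brightest red seen with laser off

          void setup() {
              pinMode(LASER_PIN, OUTPUT);
              pinMode(FIELD_PIN, INPUT);
              pinMode(LED_BUILTIN, OUTPUT);
          }

          void loop() {
              bool odd_field = digitalRead(FIELD_PIN) == HIGH;
              // Fire the laser only on odd fields; even fields sample background.
              digitalWrite(LASER_PIN, odd_field ? HIGH : LOW);

              uint8_t red = analogRead(RED_PIN) >> 2;  // 0-1023 down to 0-255
              if (!odd_field) {
                  // Laser off: track the ambient peak, decaying slowly over time.
                  if (red > ambient_peak) ambient_peak = red;
                  else if (ambient_peak > 0) ambient_peak--;
              } else {
                  // Laser on: anything well above ambient is (probably) the spot.
                  bool spot = red > min(255, ambient_peak + MARGIN);
                  digitalWrite(LED_BUILTIN, spot ? HIGH : LOW);  // crude indicator
              }
          }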

        1. Matt Mets says:

          Cool, any chance you have some documentation on your project? The real-world variability is unfortunate, but makes sense. I like the idea of doing thresholding in opposite fields. Perhaps one could also add some analog filtering on the input to look for light edges rather than just intensity? Now you’ve got me thinking…

      2. Carnes says:

        oooh, neat. I actually read the article this time, hah.

        Yeah, I’m way off. I was thinking it was a point laser. The band filter seems like a great idea too.

        I think it would be difficult to time the video feed for Arduino consumption. A serial camera would work if you could pull pictures on demand. I honestly don’t know how to convert an analog NTSC signal into something the Arduino can digest (I am a programmer), which is the best part of his project. Could a comparator be used as the threshold detector? The Arduino or a potentiometer could be used to set the threshold. But timing sync frames? ffft, I’m lost. I would shove it all into the Arduino and work it out in software.. which is probably not the best way : )
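        Maybe the ATmega’s built-in analog comparator could do the threshold part? Digital pins 6 and 7 are AIN0/AIN1 on an Uno-class board. A guess at the setup (the registers are real ATmega328 ones; the wiring is an assumption):

          // Video red level on AIN0 (digital pin 6), threshold pot wiper on
          // AIN1 (digital pin 7). The interrupt fires when the red level
          // rises above the pot voltage.
          #include <Arduino.h>

          volatile bool laser_seen = false;

          ISR(ANALOG_COMP_vect) {
              laser_seen = true;  // red just crossed the threshold
          }

          void setup() {
              ACSR = _BV(ACIE)                 // comparator interrupt on
                   | _BV(ACIS1) | _BV(ACIS0);  // trigger on rising edge
          }

          void loop() {
              if (laser_seen) {
                  laser_seen = false;
                  // ...timestamp against the last HSYNC here...
              }
          }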

        1. Matt Mets says:

          Oh, there’s an IC that can (more or less) do the separation for you:

          http://www.national.com/mpf/LM/LM1881.html

          From what I can tell, it will separate out the important bits and give you a nice digital signal saying ‘new frame of data. first line of video data. next line of video data’. Then, using the comparator circuit (from the article) to generate a ‘red threshold exceeded! red threshold exceeded!’ signal, you should be able to pull them all together in software to build a simple system. There are some complications (blocking out the first bit of each scan line, because it doesn’t contain video data), but that shouldn’t be /too/ difficult to pull off in software.
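          That blocking-out could be as simple as ignoring triggers that arrive too soon after each horizontal sync. A sketch (the ~11 µs figure is the nominal NTSC horizontal blanking interval; the names are made up):

            #include <cstdint>

            // Sync tip + back porch carry no picture, so a comparator
            // trigger this soon after HSYNC is sync junk, not video.
            const uint32_t BLANKING_US = 11;

            bool isRealVideo(uint32_t now_us, uint32_t line_start_us) {
                return (now_us - line_start_us) > BLANKING_US;
            }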

          1. Chris says:

            I was working on a similar system, and that chip will save me a lot of time! Though I had a slightly different approach: two laser pointers directed at a fixed point, and a grayscale camera with a narrow band-pass (640–680 nm) filter on it. From the distance between the two dots we’d be able to calculate the distance to an object. Or you could also calculate it off the camera’s central focal axis.

            Getting back to the original topic: we could get away without even using an array; we could just use three integers: an X (or column) value, a Y (or row) value, and a Max value. When the data is coming in from the camera, it’s just a continuous stream of values that the 1881 will help us parse. At the beginning of each frame we set the Max value to zero; then, if the next value is larger, we store it and its location and continue. When a new-frame signal is encountered, we know we have our location in camera coordinates.
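            In code the whole thing is about this small (a sketch; the frame size and the flat pixel stream are assumptions):

              #include <cstdint>
              #include <cstddef>

              const size_t ROWS = 240, COLS = 320;  // assumed frame size

              struct SpotLocation { size_t row, col; uint8_t max; };

              // One pass over the pixel stream: no frame buffer, just three
              // values tracking the brightest pixel (the laser spot) so far.
              SpotLocation findSpot(const uint8_t *stream /* ROWS*COLS values */) {
                  SpotLocation spot = {0, 0, 0};
                  for (size_t row = 0; row < ROWS; ++row) {
                      for (size_t col = 0; col < COLS; ++col) {
                          uint8_t v = *stream++;
                          if (v > spot.max) {
                              spot.max = v;
                              spot.row = row;
                              spot.col = col;
                          }
                      }
                  }
                  return spot;
              }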

