I came across this video today of an iRobot Create autonomously navigating a hallway using a standard laptop webcam. The guidance software centers the robot on the perceived vanishing point, picked out from visual cues such as wall and floor edges, allowing the robot to cruise in a straight path down the center of the hall.
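For a rough idea of how vanishing point detection can work, here's a minimal sketch using OpenCV: find edges, pull out line segments with a Hough transform, keep the oblique ones (the wall/floor lines that actually converge toward the vanishing point), and take a robust average of their pairwise intersections. The thresholds and angle window are my own guesses, not values from the project.

```python
import cv2
import numpy as np

def intersect(l1, l2):
    """Intersection of two segments extended to infinite lines, or None if parallel."""
    x1, y1, x2, y2 = l1
    x3, y3, x4, y4 = l2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-6:
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return (px, py)

def find_vanishing_point(frame):
    """Estimate the vanishing point from a webcam frame (BGR image)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return None

    # Keep only clearly oblique lines; near-horizontal and near-vertical
    # segments (door frames, ceiling tiles) rarely point at the vanishing point.
    candidates = []
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        if 15 < angle < 75 or 105 < angle < 165:
            candidates.append((x1, y1, x2, y2))

    # Intersect every pair of candidates; the median is robust to a few outliers.
    points = []
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            p = intersect(candidates[i], candidates[j])
            if p is not None:
                points.append(p)
    if not points:
        return None
    return np.median(np.array(points), axis=0)  # (x, y) in pixel coordinates
```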
To carry out the higher-level network task of maintaining connectivity in the deployed sensor network, the mobile node (an iRobot Create) must navigate through the indoor environment. We implemented the navigation and localization system using only the laptop camera as a sensor.
Navigation uses the vanishing point extracted from the corridor walls and edges. Credit for the robust navigation goes to Pratap Tokekar. I worked on localization with fiducials (cones), helping the robot estimate its position by measuring its range to the cone landmarks.
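Range-only localization from known landmarks is a classic trilateration problem. Here's a hedged sketch of the linearized least-squares version, assuming ranges to at least three cones at known positions; the function name and method here are illustrative, not taken from the project report.

```python
import numpy as np

def locate_from_ranges(landmarks, ranges):
    """Solve for the robot's (x, y) given landmark positions and measured ranges.

    Subtracting the first range-circle equation from the others cancels the
    quadratic terms, leaving a linear system A @ [x, y] = b.
    """
    L = np.asarray(landmarks, dtype=float)  # shape (n, 2), known cone positions
    r = np.asarray(ranges, dtype=float)     # shape (n,), measured distances
    A = 2 * (L[1:] - L[0])
    b = (r[0] ** 2 - r[1:] ** 2
         + np.sum(L[1:] ** 2, axis=1) - np.sum(L[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (x, y)

# Example: three cones and plausible (made-up) range readings.
print(locate_from_ranges([(0, 0), (5, 0), (0, 5)], [3.0, 3.6, 4.1]))
```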
More from Pratap’s blog:
I was responsible for vision-based navigation of the robot within the hallways. I used the vanishing points formed by the parallel lines present indoors to compute the robot's heading. This was then fed into a controller to steer the robot for navigation. The computation was made robust to changes in lighting conditions, false detections, and occlusions through a layered filtering approach that included RANSAC and least-squares filtering, among others.
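The project report has the real details of that layered filtering, but as a sketch of the idea: given the cloud of candidate intersection points from each frame, RANSAC picks the point most lines agree on and refines it by least squares, and a simple proportional controller then turns the robot toward it. The iteration count, inlier tolerance, gain, and field-of-view value below are my assumptions, not the project's.

```python
import random
import numpy as np

def ransac_vanishing_point(points, iters=100, tol=20.0):
    """Pick the candidate point with the most inliers within tol pixels,
    then refine it as the mean of those inliers (least-squares step)."""
    pts = np.asarray(points, dtype=float)
    best_inliers = []
    for _ in range(iters):
        guess = pts[random.randrange(len(pts))]
        d = np.linalg.norm(pts - guess, axis=1)
        inliers = pts[d < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    if len(best_inliers) == 0:
        return None
    return best_inliers.mean(axis=0)

def steering_command(vp_x, image_width, fov_deg=60.0, gain=0.8):
    """Map the vanishing point's horizontal pixel offset to a turn rate.

    The offset is converted to an approximate heading error in degrees via
    the camera's horizontal field of view, then scaled by a proportional gain.
    """
    offset = vp_x - image_width / 2.0
    heading_error = offset * (fov_deg / image_width)
    return -gain * heading_error  # turn toward the vanishing point
```

The sign convention on the output (positive = turn left vs. right) depends on your drive interface, so treat that as a placeholder.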
I’m impressed with how well this works, and it seems like it wouldn’t be too painful to implement for a special-case hallway rover like this. Pratap’s project report has a lot of details about how the vanishing point detection system works. Check it out if you’d like to implement something like this yourself.
If anyone has other examples of vanishing point guidance, please leave a comment.
Vanishing Points Based Navigation – Project Report Details (PDF)
Pratap Tokekar: Vision Based Navigation
Video: Vision based Navigation and Localization