One of the features of the Raspberry Pi Compute Module Development Kit is that the IO board has two camera serial interface connectors. This means you can connect two of the official (and popular) Raspberry Pi Camera Modules to the board at once. Argon Design intern David Barker took advantage of this to build a camera capable of depth perception, which requires two simultaneous images and a stereo depth algorithm. Here’s how he explains it:
The algorithm we used is a variant of one which is widely used in video compression. The basic idea is to divide each frame into small blocks and to find the best match with blocks from other frames, which tells us how far each block has moved between the two images. The video version is designed to detect motion, so it tries to match against the previous few frames. The depth perception version instead tries to match the left and right camera images against each other, allowing it to measure the parallax between them.
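To make the idea concrete, here is a minimal Python sketch of block matching over a rectified stereo pair, using the sum of absolute differences (SAD) as the matching cost. This is only an illustration of the general technique, not David's actual code; the function name and parameters are made up for the example.

```python
import numpy as np

def block_match_disparity(left, right, block=16, max_disp=64):
    """Brute-force stereo block matching (a simplified sketch, not the
    Argon Design implementation).

    left, right: rectified grayscale images as 2-D arrays of the same shape.
    For each block in the left image, search along the same row of the
    right image for the best SAD match; the horizontal shift of that
    match is the disparity (parallax)."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = left[by:by + block, bx:bx + block].astype(np.int32)
            best_d, best_cost = 0, float("inf")
            # With rectified cameras the match can only shift horizontally,
            # so we only search along the same row (the epipolar line).
            for d in range(min(max_disp, bx) + 1):
                cand = right[by:by + block, bx - d:bx - d + block].astype(np.int32)
                cost = int(np.abs(ref - cand).sum())  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by // block, bx // block] = best_d
    return disp
```

Closer objects produce larger disparities, so once the cameras are calibrated, depth can be recovered as roughly (focal length × baseline) / disparity. The brute-force search above is also exactly the kind of per-block workload that benefits from the optimizations described next.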
David started with some Python code to prototype the idea, then translated it into C, which sped up the image processing by a factor of over 1,000. He then ported it once more, this time to assembly for the Raspberry Pi’s GPU… no doubt an impressive feat! The whole pipeline of capturing images from both cameras, processing them into depth data, and displaying the results on screen now takes around 90 milliseconds, giving a respectable frame rate of about 12 frames per second.
For more details on how he accomplished this, check out his case study. [via The Raspberry Pi Blog]