Interesting paper from Neel Joshi, Sing Bing Kang, C. Lawrence Zitnick, and Richard Szeliski at Microsoft Research, describing how they mounted three gyroscopes and a three-axis accelerometer on a DSLR to record the camera's motion while a picture is being taken, then used that data to automatically deblur the resulting image in software. From their abstract:

We present a deblurring algorithm that uses a hardware attachment coupled with a natural image prior to deblur images from consumer cameras. Our approach uses a combination of inexpensive gyroscopes and accelerometers in an energy optimization framework to estimate a blur function from the camera’s acceleration and angular velocity during an exposure. We solve for the camera motion at a high sampling rate during an exposure and infer the latent image using a joint optimization. Our method is completely automatic, handles per-pixel, spatially-varying blur, and out-performs the current leading image-based methods.

Their prototype is built on an Arduino. [via adafruit]
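The core idea, integrating the gyro's angular-velocity samples into a camera trajectory, turning that trajectory into a blur kernel, and then deconvolving, can be sketched as below. This is a simplified illustration, not the paper's method: it assumes pure rotation, a single spatially-uniform kernel, and plain Richardson-Lucy deconvolution instead of the authors' joint energy optimization that handles per-pixel, spatially-varying blur. All function names and parameter values here are invented for the sketch.

```python
# Illustrative sketch only -- NOT the paper's joint optimization.
# Build a blur kernel (PSF) from gyro samples, then sharpen with
# simple Richardson-Lucy deconvolution against that known kernel.
import numpy as np

def psf_from_gyro(omega, dt, focal_px, size=15):
    """Accumulate a point-spread function from angular-velocity samples.

    omega: (N, 2) array of rad/s about two axes; dt: sample spacing (s);
    focal_px: focal length in pixels. Small-angle model: a rotation of
    theta radians shifts the image by roughly focal_px * theta pixels.
    """
    angles = np.cumsum(omega * dt, axis=0)   # integrate velocity -> angles
    shifts = focal_px * angles               # image-plane shifts in pixels
    psf = np.zeros((size, size))
    c = size // 2
    for dx, dy in shifts:                    # deposit exposure time along path
        x, y = int(round(c + dx)), int(round(c + dy))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] += 1.0
    return psf / psf.sum()                   # normalize to unit energy

def convolve2d_fft(img, psf):
    """Circular convolution via FFT (adequate for this toy example)."""
    kernel = np.zeros_like(img)
    ky, kx = psf.shape
    kernel[:ky, :kx] = psf
    # Shift the PSF so its center sits at the origin of the FFT grid.
    kernel = np.roll(kernel, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def richardson_lucy(blurred, psf, iters=30):
    """Non-blind deconvolution: the PSF is known (here, from the gyro)."""
    est = np.full_like(blurred, 0.5)
    psf_flip = psf[::-1, ::-1]
    for _ in range(iters):
        ratio = blurred / (convolve2d_fft(est, psf) + 1e-12)
        est = est * convolve2d_fft(ratio, psf_flip)
    return est
```

In use, you would feed `psf_from_gyro` the angular-velocity log captured during the exposure, blur-model the image with the resulting kernel, and iterate Richardson-Lucy toward a sharper estimate. The paper's real contribution is doing this with a natural-image prior in a joint optimization, with the inertial data constraining the motion estimate rather than fixing it outright.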


Sean Michael Ragan

I am descended from 5,000 generations of tool-using primates. Also, I went to college and stuff. I am a long-time contributor to MAKE magazine, and my work has also appeared in ReadyMade, c't – Magazin für Computertechnik, and The Wall Street Journal.

  • Josh

    They could probably use some of that blur information to infer depth.