When I first read about Douglas McDonald’s Scribbler Bot (MAKE, Volume 07, page 141), it was love at first sight. I simply had to make a drawing robot.
Doug’s original Scribbler Bot converted webcam photos into distinctive line drawings, then used a homemade plotter (with a pen or pencil zip-tied on) to render them onto poster-size paper. I knew from his article that to put something like this together myself, I needed to get some stepper motors and boss them around with some software. Luckily, I got a lot of the hardware issues out of the way by finding a Japanese medical contraption that had a former life organizing vials of blood. It was a perfect XYZ platform for my drawbot!
I quickly realized that I couldn’t do this project on my own. The hardware required reverse-engineering, and the software had to be coded. My friend 3ric held a robot-making get-together at Seattle’s Public N3rd Area, and friends were recruited to help. Fueled by undocumented quantities of pizza and Mountain Dew, contributors 3ric, Adam, Melvin, Brian, Divide, John, and Choong brought their ninja-level hardware-hacking and software-writing talents to the project.
On the hardware side, we hooked up the steppers and the limit switches to the MAKE Controller, and we put together DB9-connected serial cables with different-colored wires so they would be easy to follow if there was a problem. I found that when running lots of wires, it helps to twist them all up into a cable with a drill, and when attaching them to things, zip ties are your friend.
Throughout the build, it was important to keep a notebook with all of our diagrams and notes. The stepper motors required more power than the MAKE Controller could put out, so I ordered some Interinar microstepping motor controllers that could be adjusted to output the power the steppers needed.
Holding the paper down turned out to be somewhat tricky — we needed a separate base and springs to hold it stable. We added legs to the contraption, and John Blunt, our woodworking neighbor, made a beautiful oak base with clipboard clips to keep the drawing paper secure.
The drawbot process starts with a photo taken by my MacBook Pro’s iSight camera. Any image would work, but using the iSight removes the inconvenient step of importing photos to the computer. Then you save the image as a .bmp file, and drop the file into our Launch Drawbot program. Launch Drawbot converts the color image into a simple black-and-white bitmap using Peter Selinger’s mkbitmap utility, and then converts the resulting bitmap into a vector graphic representation using Selinger’s Potrace. Mkbitmap and Potrace are both open source, available on sourceforge.net.
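The heavy lifting in that first step is done by the real mkbitmap tool, which also does filtering and scaling, but its core idea, turning a grayscale image into pure black and white, is simple thresholding. Here's a minimal sketch of that one idea (illustrative only, not Launch Drawbot's actual code; the function name and threshold value are made up):

```python
# Illustrative sketch: map each grayscale pixel (0-255) to black or white.
# mkbitmap also highpass-filters and scales; this shows only the threshold step.
def to_bitmap(gray, threshold=128):
    """gray: 2D list of 0-255 values. Returns 2D list of 1 (black) / 0 (white)."""
    return [[1 if px < threshold else 0 for px in row] for px_row in [None] for row in gray]

gray = [
    [250, 240, 10],
    [200,  90, 30],
]
print(to_bitmap(gray))  # [[0, 0, 1], [0, 1, 1]]
```

Potrace then takes a bitmap like this and fits smooth vector outlines around the black regions, which is what gives the drawbot continuous pen paths instead of a grid of dots.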
Launch Drawbot shows you a preview of the drawing before you start, so you can get an idea of how it will turn out. You can also adjust the size of the dark areas, where the contrast edges are drawn, and how thick the fill lines are. The better the image going in, the better the drawing coming out, and we discovered that filtering the image before generating the vectors is critical to reducing the line count, which in turn cuts drawing time. We didn't want to wait 8 hours for our pictures.
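To see why pre-filtering matters: a single speckle of noise survives thresholding as its own tiny black blob, and the tracer dutifully emits a tiny closed path for it, which the pen then has to drive over to draw. A blur smooths speckles below the threshold before they ever become paths. This is a rough sketch of the idea (a hypothetical helper, not our code; we relied on mkbitmap's built-in filtering):

```python
# Sketch of pre-filtering: a 3x3 box blur averages each pixel with its
# neighbors, so isolated bright or dark speckles get smoothed away
# instead of turning into extra traced paths (and extra pen travel).
def box_blur(gray):
    """gray: 2D list of 0-255 values. Returns a blurred copy."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [gray[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) // len(vals)
    return out

# A lone bright pixel in a dark field drops from 255 to 28 after one pass,
# well below a typical threshold, so it never becomes a path.
field = [[0] * 3 for _ in range(3)]
field[1][1] = 255
print(box_blur(field)[1][1])  # 28
```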
Once the actual drawing starts, the program sends packets of data over Ethernet to tell the drawbot which coordinates to go to. As soon as you command the drawbot to begin, it puts the pen down on the paper and starts drawing. It draws an outline of all the areas first, and then goes back and fills in the shading.
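The real wire format is defined by the MAKE Controller firmware and our software, but the idea, streaming pen state plus target coordinates over the network, can be sketched in a few lines. Everything here is hypothetical (packet layout, function names); it shows the shape of the approach, not the drawbot's actual protocol:

```python
import socket
import struct

def encode_move(x, y, pen_down):
    """Hypothetical packet: a 1-byte pen flag followed by two big-endian
    32-bit step coordinates. The real drawbot's format is firmware-defined."""
    return struct.pack(">?ii", pen_down, x, y)

def send_path(host, port, points):
    """Travel to the first point pen-up, then draw through the rest pen-down,
    streaming one packet per coordinate over a TCP connection."""
    with socket.create_connection((host, port)) as s:
        first_x, first_y = points[0]
        s.sendall(encode_move(first_x, first_y, False))  # reposition, pen up
        for x, y in points[1:]:
            s.sendall(encode_move(x, y, True))           # draw to coordinate
```

Outline-then-fill then falls out naturally: the software just sends all the outline paths first, each as a pen-up travel move followed by pen-down draw moves, and queues the fill paths afterward.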
Everybody who worked on the drawbot agreed that no matter how much you suffer from OCD, it's spellbinding to watch and can hold your attention for hours on end. Feel free to download the code for the project, play with it, and make it better. It's released under the GPL, which means you're free to use and modify it, as long as any versions you distribute stay under the same license.
For more, go to makezine.com/11/drawbot.