Making a Laser-cut Zoetrope with Processing and Kinect


This Codebox shows you how to create a physical zoetrope using Processing, a Kinect, and a laser cutter. The sketch loads a movie recorded from the Kinect's depth camera, uses the Processing OpenCV library to turn that movie into a series of outlines, and saves the outlines in vector form as a DXF file that can be sent to a laser cutter. I'll also explain the design of the viewer mechanism that gives the zoetrope its spin.

About Zoetropes

Before jumping into the code, a bit of cultural history and inspiration. The zoetrope was a popular Victorian optical toy that produced a form of animation out of a paper circle. On the circle, zoetrope makers would print a series of frames from an animation. Then the circle would be surrounded by an opaque disc with a series of slits cut out of it. When the paper circle spun, the viewer would look at it through the slits and see an animation. The slits acted like a movie projector, letting the viewer see only one frame at a time in rapid succession, resulting in the illusion of movement.

Recently, people have begun to figure out how to achieve the same illusion with three-dimensional objects. The artist Gregory Barsamian builds sculptures that spin in front of strobe lights to produce the illusion of motion. Each sculpture consists of a series of objects at successive stages of a movement, and the strobes act like the slits in the zoetrope, freezing each object into a single frame (Barsamian may be familiar to some Make fans from our earlier coverage: Gregory Barsamian's Persistence Of Vision).

Pixar recently picked up the trick to create a physical zoetrope for their lobby. Animators there were convinced that a physical zoetrope is an unparalleled demonstration of the principle of animation: the transformation of a series of still images into moving ones.

So, what’s the recipe for a physical zoetrope? We need a series of images that represent consecutive stages in a motion. Then we need to transform these into distinct physical objects. Once we’ve got these, we need a mechanism that can spin them around. And, last but not least, we need a strobe light to “freeze” each object into one frame of an animation.

How can we do this ourselves? Well, to get that series of objects, we're going to extract silhouettes from a piece of video recorded with the Kinect. We're then going to turn those silhouettes into a vector file that we can use to control a laser cutter, cutting out a series of acrylic objects in the shape of each frame of our animation.

Let’s dive in.

Recording the Depth Movie

The first thing to do is to download Dan Shiffman's Kinect library for Processing and put it into your Processing libraries folder. If you're not familiar with how to do that, Dan's got great, clear instructions on the Kinect library page.

We’re going to use this library to record a depth movie off of the Kinect. (In theory, you might be able to also use a conventional camera and a well-lit room, but what fun would that be?) Thankfully, recording your own depth movie is only a few lines of code away from the Kinect example that ships with the Processing library:

Discussion

Let's talk through how this works. First, we include the Kinect library and the Processing video library; we'll need the video library to record a movie. Then we declare Kinect and MovieMaker objects. MovieMaker is the object that does the work of recording the output of our sketch into a movie file.

In setup, we set the frame rate to 24 so that it will match the movie we record. We also configure the sketch to be 640 by 480 to match the size of the video image that’s going to come in from the Kinect. We do some basic Kinect setup: tell our kinect object to start reading data from the device and enable the depth image. Then we initialize the MovieMaker class, giving it a quality setting, a filetype, and a filename. You can read more about how MovieMaker works in the Processing documentation. It’s important that the frame rate we pass to MovieMaker matches that of the sketch so that our movie plays back at the right speed.

Our draw function is incredibly simple. All we do is call kinect.getDepthImage() and draw the result to our sketch with Processing's image() function. That shows the grayscale image representing the depth map the Kinect extracts from the scene. This is a black-and-white image where the shade of gray of each pixel corresponds not to the color of the object but to how far away it is from the Kinect: closer objects have lighter pixels, and farther-away objects are darker. Later, we'll process these pixels to pick out objects at a particular depth for our silhouette.

Now that we've drawn the depth image to the screen, all we have to do is capture the result into a new frame of the movie we're recording (mm.addFrame()). The last significant detail of the sketch is that we use key events to give ourselves a way of stopping and completing the movie: when someone hits the spacebar, the movie stops recording and saves the file. We also have to remember to do some Kinect cleanup on exit, or else we'll get funky errors whenever we stop the sketch.

I've posted an example of what a movie recorded with this sketch looks like; see the Vimeo links below.

Now, if you don't have a Kinect, or you're having trouble recording a depth movie, don't despair! You can still play along with the next step. You can download that depth movie of me doing jumping jacks straight from Vimeo: Kinect Test Movie for Laser Zoetrope. I've also uploaded the depth movie I used for the final laser zoetrope shown above if you want to follow along exactly: Kinect Depth Test Movie. That latter movie features Zach Lieberman, an artist and hacker in New York and one of the co-founders of OpenFrameworks, a C++-based cousin of Processing.

Creating the Laser-Cutter File

Now that we’ve got a depth movie, we need to write another Processing sketch that processes that movie, chooses frames for our animation, finds the outlines of our figure, and saves out a vector file that we can send to the laser cutter.

To accomplish these things, we're going to use the Processing OpenCV library and Processing's built-in beginRaw() function. Create a new Processing sketch, save it, create a "data" folder within the sketch folder, move your depth movie into it (named "test_movie.mov"), and paste the following source code into your sketch (or download it as lasercut_zoetrope_generator.pde):
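What follows is a minimal reconstruction of that sketch, reassembled from the excerpts discussed below; the window size, the timeBetweenFrames spacing, and the noFill() call are my guesses rather than the exact original:

  import processing.dxf.*;
  import hypermedia.video.*;

  OpenCV opencv;
  int currentFrame = 0;
  int totalFrames = 12;
  // guess: seconds of movie time between the frames we sample
  float timeBetweenFrames = 0.75;

  void setup() {
    size(1280, 720);  // a guess: large enough for a 4x3 grid of frames
    opencv = new OpenCV(this);
    opencv.movie("test_movie.mov");  // loads from the sketch's data folder
    noFill();  // we only want outlines for the laser
    // record everything we draw into a vector file until endRaw()
    beginRaw(DXF, "full_output.dxf");
  }

  void draw() {
    // jump to this frame's moment in the movie and read it
    opencv.jump(0.3 + map(currentFrame * timeBetweenFrames, 0, 9, 0, 1));
    opencv.read();

    // lay the frames out in a four-by-three grid
    float x = (currentFrame % 4) * (width / 4);
    float y = (currentFrame / 4) * (height / 3);

    // flatten the depth image to pure black and white
    opencv.threshold(150);

    pushMatrix();
    translate(x + 20, y);
    scale(0.2);

    // trace each continuous region as a closed outline, plus a base rectangle
    Blob[] blobs = opencv.blobs(1, width*height/2, 100, true, OpenCV.MAX_VERTICES*4);
    for (int i = 0; i < blobs.length; i++) {
      beginShape();
      for (int j = 0; j < blobs[i].points.length; j++) {
        vertex(blobs[i].points[j].x, blobs[i].points[j].y);
      }
      endShape(CLOSE);
      rect(120, height - 2, 220, 50);
    }
    popMatrix();

    currentFrame++;
    if (currentFrame == totalFrames) {
      endRaw();  // complete the DXF file
      noLoop();
    }
  }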

Discussion

If you run this sketch with the second test movie I linked above, it will draw the twelve extracted silhouettes in a grid and save a file called "full_output.dxf" in the sketch folder. That DXF is the vector file we can bring into Illustrator or any other design program for final processing before sending it to the laser cutter.

Now, let’s look at the code.

In setup, we load the test_movie.mov file into OpenCV, something that should be familiar from past posts on OpenCV. We also call beginRaw(), a Processing function for creating vector files. beginRaw() causes our sketch to record all of its output into a new vector file until we call endRaw(); that way we can build up the file over multiple runs of the draw loop. In this case we're creating a DXF file rather than a PDF because DXF is easier for the laser to process: the laser needs continuous lines to produce reliable output, and PDFs produced by Processing tend to contain many discrete line segments, which can cause funky results when cut, including slower jobs and uneven thickness.

Now, before we dive into the draw method, a bit about the approach. We want to pull 12 frames out of our movie that will make good frames for our animation. Then we want OpenCV to extract their outlines (or "contours" in OpenCV parlance), and finally we want to draw those in a grid across the screen so that they don't overlap and the final DXF file contains all the frames of the animation.

This sketch approaches these problems by creating a “currentFrame” variable that’s defined outside the draw loop. Then, on each run of the draw loop, that variable gets incremented and we use it to do everything we need: jump forward in the movie, move around to a different area of the sketch to draw, etc. Finally, once we’ve finished drawing all 12 frames to the screen, we call “endRaw()” to complete the DXF file, just as we called “mm.finish()” in the first sketch to close the movie file.

So, given that overall structure, how do we draw the contour for each frame? Let’s look at the code:

   opencv.jump(0.3 + map(currentFrame * timeBetweenFrames, 0, 9, 0, 1));
   opencv.read();

This tells OpenCV to jump forward in the movie by a specific amount of time. The 0.3 is the starting point of the frames we're going to grab, and it's something I figured out by guess-and-check: I tried a bunch of different values, ran the sketch each time, and judged whether the frames I ended up with would make a good animation. The 0.3 represents that starting time in seconds.

We want all of our frames to be evenly spaced so our animation plays back cleanly. To achieve this, we add an increasing amount to our starting jump of 0.3 based on which frame we're on. Once we've calculated the right time, we read that frame of the movie using opencv.read().

The next few lines use the modulo operator ("%") with the currentFrame number to lay the frames out in a four-by-three grid; in the reconstructed sketch above, that math looks like this:
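  float x = (currentFrame % 4) * (width / 4);   // column, 0 through 3
  float y = (currentFrame / 4) * (height / 3);  // row, 0 through 2

Then, with the frame's position settled, there's a simple-looking OpenCV call that actually is pretty cool given the context: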

  opencv.threshold(150);

This tells our opencv object to flatten the frame to a pure black and white image, eliminating all shades of gray. It decides which parts to keep based on the grayscale value we pass in, 150. But since the grayscale values in our depth image correspond to the actual physical distance of objects, in practice this means that we’ve eliminated anything in the image further away than a couple of feet, leaving just our subject isolated in the image.

If you’re using your own depth image, you’ll want to experiment with different values here until you’re seeing a silhouette that just represents the figure that you want to capture in animation.

The next few lines, wrapped between calls to pushMatrix() and popMatrix(), are probably the most confusing in the sketch. Thankfully, we can break them down into two parts to understand them: moving and scaling our frame of reference, and drawing the silhouette calculated by OpenCV.

The first three lines of this section don't do anything but change our frame of reference. pushMatrix() and popMatrix() are a strangely named pair of functions that make complicated drawing code significantly easier. They let us temporarily shift and scale the coordinate system of our Processing sketch so that we can use the same drawing code over and over to draw at different scales and on different parts of the screen.

  pushMatrix();
  translate(x + 20, y);
  scale(0.2);

Here’s how it works. First we call pushMatrix(), which means: “save our place” so we can jump back out to it when we call popMatrix(). Then we call “translate()” which moves us to a different part of the sketch using the x and y variables we set above based on our current frame. Then we call “scale()” so that anything else we draw until the next popMatrix() will be 20 percent the size it would normally be.

The result of these three lines is that we can do the OpenCV part that comes next — calculating and drawing the contour — without having to think about where on screen this is taking place. Without pushMatrix we’d have to add our x and y values to all of our coordinates and multiply all of our sizes by 0.2. This makes things much simpler.
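Concretely, here's what the vertex() call from the next excerpt would have to look like without the transform (a hypothetical fragment, just to show the arithmetic that pushMatrix() spares us):

  // hypothetical: without the transform, every point needs the math inline
  vertex(x + 20 + blobs[i].points[j].x * 0.2,
         y + blobs[i].points[j].y * 0.2);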

Now, the OpenCV code:

  // find continuous regions ("blobs") in the thresholded image
  Blob[] blobs = opencv.blobs(1, width*height/2, 100, true, OpenCV.MAX_VERTICES*4);
  for (int i = 0; i < blobs.length; i++) {
    // connect the dots: trace this blob's outline as one closed shape
    beginShape();
    for (int j = 0; j < blobs[i].points.length; j++) {
      vertex(blobs[i].points[j].x, blobs[i].points[j].y);
    }
    endShape(CLOSE);

    // the rectangular base that attaches the silhouette to the disc
    rect(120, height - 2, 220, 50);
  }

This code certainly looks complicated, but it's not all that bad. The most interesting line is the first one, which calls opencv.blobs(). That function analyzes the image we've stored and looks for continuous areas where all the adjacent pixels are the same color. In the case of our example movie, there will be exactly one blob, and it will be around Zach's silhouette; our use of the threshold eliminated everything else from the scene. If you're using my other example movie or your own depth movie, you may have multiple blobs, and that's OK; you'll just end up with a more complicated vector file.

And once we get down to it, drawing these blobs isn't too bad either. We loop over the array of them, and each blob has a points array inside it that we walk to create vertices. Basically, we're playing connect-the-dots: we go from each point to the next, drawing lines between them, until we complete the whole shape.

And that’s all there is to generating the DXF file.

Preparing for the Laser

After generating this DXF file, you'll need to bring it into Illustrator or your favorite vector-editing program to perform some basic cleanup: group each frame into a single object, cut out the parts of the silhouettes that overlap the rectangle so that each figure is actually attached to its base, etc. I also selected nine of these 12 frames and then duplicated them so that I'd have a looping animation rather than one that reset back to a starting posture. I've uploaded the final Illustrator file here for you to look at: contour_animation_for_laser.ai

Once we've got the contours cut out, the last step is to design and cut the wheel that they'll spin on. I acquired a thrust bearing (a kind of engineer's lazy Susan) that allows my disc to spin freely. My bearing includes holes on top for attaching things to it. I measured the distance between those holes and then put together a design for a disc that could mount onto the bearing and hold each of the frames of the animation:

Contour disc for laser

Getting just the right size for the slots so that the silhouettes would press-fit tightly without any glue took a little bit of experimentation and some false starts on the laser. You can download the Illustrator file for this design here: contour_disc_for_laser.ai

Once you've got those two Illustrator files, cutting them out on the laser is pretty much as easy as hitting print. Literally: you actually start the process by hitting print in Illustrator. You've got to fill in a few additional details about the laser's power and speed settings, but then you're off to the races.

Hopefully, this tutorial has given you enough of what you need to start recording Kinect depth data and using it to generate laser-cuttable vector files. Have fun!

Get Your Own Laser-cut Zoetropes!

In response to all of the great reactions to this project, I’ve started up a site to actually produce laser-cut zoetropes for purchase: PhysicalGIF.com. We’re offering kits for putting together zoetropes from designer-made animated GIFs. The kits will come with everything you need to assemble a zoetrope like the one shown here: the laser-cut parts, the base, even the strobe light. Eventually you’ll even be able to upload your own GIFs to have them converted into physical form. Head over there now to sign up to be notified when the kits become available.

More:
Check out all of the Codebox columns here
Visit our Make: Arduino page for more on this popular hobby microcontroller

In the Maker Shed:



Getting Started with Processing
Learn computer programming the easy way with Processing, a simple language that lets you use code to create drawings, animation, and interactive graphics. Programming courses usually start with theory, but this book lets you jump right into creative and fun projects. It's ideal for anyone who wants to learn basic programming, and serves as a simple introduction to graphics for people with some programming skills.

12 thoughts on “Making a Laser-cut Zoetrope with Processing and Kinect”

  1. Anonymous says:

    Thanks for the interesting workshop. I'm an animation filmmaker who's been working with this stuff for a while, and there's a long history of what is called "rotoscoping" (created by Max Fleischer of the Fleischer Bros animation studio!) to trace off live-action references for animation. Also, here's my device for a super simple sync-strobe optical toy, no scripting involved and easy to make: http://www

    1. Anonymous says:

      That’s awesome, Lorelei! I will do a separate post about your sync-strobe.

