Make a networked face sensor with Processing


This Codebox explores how to use a web server to collect sensor data from a Processing sketch. As an example, we’ll use OpenCV to periodically detect and report the number of faces that appear in your webcam’s field of view. You might use something like this at a conference or art show to see how many people are interested in a particular session or exhibit. Using this data feed, you might then create a mobile app that shows you where the most people are at a conference at any given time.

Set up the web server

You’ll need access to a web server with PHP for this project. (PHP is a scripting language for creating web sites, and it’s offered by almost every hosting company.) If you don’t have an account with a hosting company, you can simply set up a web server on your own machine. In fact, most new Macs come with Apache, one of the most popular web servers, pre-installed. All you have to do is activate it, which you can do through your computer’s “System Preferences” panel. Just open the preferences, click “Sharing,” and then check “Web Sharing” in the list at the left. (If you’re on Windows, you’ll need to install a server yourself. I’ve had great success with XAMPP, which has everything you’ll need.)

If all goes well, you’ll see the “Web Sharing: On” status indicator light up in green. You’ll also see the address you can use in your browser to access the “root” of your web site. (Make a note of this address, as we’ll need it later in the Processing sketch.) The following figure shows you more detail:

[Figure: enabling Web Sharing in the Mac’s “Sharing” preferences panel]

Once sharing is enabled, you’re ready to set up the PHP script that will log the data. This is done by adding a few files in the web server’s root directory. On a Mac, this is usually the “Sites” directory, which is in the same spot as your Music, Movies, and Documents directories.

To start setting up the script, drop into a terminal and type these commands:

cd ~/Sites/
mkdir face_sensor
cd face_sensor
touch sensor_log.txt
chmod 777 sensor_log.txt


In this sequence of commands, we’re changing to the “root” directory where the Mac’s web server expects to find files, creating a new directory called “face_sensor,” and then adding a blank log file that our PHP script can write to. This last step happens in two parts: first, the touch command creates a new, empty log file; second, the chmod command sets the permissions on the file so that PHP can write data to it.

Next, you need to configure your system so that the server can execute PHP scripts. SerpicoLugNut on Stack Overflow has a great description of how to do this at Easiest way to activate PHP and MySQL on Mac OS 10.6 (Snow Leopard)?. Here’s what he says to do:


Open a good text editor (I'd recommend TextMate, but the free TextWrangler or vi or nano will do too), and open:

/etc/apache2/httpd.conf

Find the line: "#LoadModule php5_module libexec/apache2/libphp5.so"

And uncomment it (remove the #).
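One detail the quoted steps leave out: Apache only picks up the change after a restart (sudo apachectl restart). If you’d rather script the edit than open an editor, a sed one-liner does the job. The following demonstrates it on a scratch copy in /tmp rather than the real /etc/apache2/httpd.conf, so you can try it safely first:

```shell
# Demonstrated on a scratch copy; run the same sed with sudo against
# /etc/apache2/httpd.conf when you're ready.
echo '#LoadModule php5_module libexec/apache2/libphp5.so' > /tmp/httpd.conf

# Delete the leading "#" from the php5_module line (keeps a .bak backup)
sed -i.bak 's|^#\(LoadModule php5_module\)|\1|' /tmp/httpd.conf

# Verify the line is now active (no leading "#")
grep '^LoadModule php5_module' /tmp/httpd.conf

# After editing the real httpd.conf, restart Apache so it loads the module:
#   sudo apachectl restart
```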

Once the directory is ready, save the following PHP to a file called record.php:
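The original record.php listing isn’t reproduced here, so what follows is a minimal reconstruction of what it needs to do, based on the fields the sketch sends (face_count, room_name, interval) and the sample log line shown later in this post. Treat it as a sketch rather than the exact original; in particular, the timezone setting and date format are assumptions you may want to adjust:

```php
<?php
// record.php -- append one tab-delimited line per report to sensor_log.txt
date_default_timezone_set('America/New_York');  // assumption: set to your own zone

// Pull the three fields out of the query string, with fallbacks if missing
$face_count = isset($_GET['face_count']) ? $_GET['face_count'] : '0';
$room_name  = isset($_GET['room_name'])  ? $_GET['room_name']  : 'unknown';
$interval   = isset($_GET['interval'])   ? $_GET['interval']   : '0';

// Build a line matching the sample output, e.g.
// 01-12-2010 06:01:41 EST  ballroom  5  2000
$line = date('m-d-Y h:i:s T') . "\t" . $room_name . "\t"
      . $face_count . "\t" . $interval . "\n";

// Append to the log file we made writable with chmod above
$fh = fopen('sensor_log.txt', 'a');
fwrite($fh, $line);
fclose($fh);

echo "OK";
?>
```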

Once you’ve copied the file, go to your browser and enter the following URL:


http://your personal website address from the sharing panel/face_sensor/record.php?face_count=5&room_name=ballroom&interval=2000

Your browser should say “OK.” If you open the sensor_log.txt file, you should see a line that looks like this:


01-12-2010 06:01:41 EST ballroom 5 2000

So, what’s happening? Basically, the PHP script is simply pulling out the values we’ve put in the URL’s query string (e.g., face_count, room_name, and interval) and writing them out to a tab-delimited file. That’s it. (Well, OK, it’s also adding the date and time.)
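A nice side effect of the tab-delimited format is that the log is easy to slice with standard command-line tools. For instance (a quick sketch using a made-up sample file in /tmp, with the same field order as the log line above), you can total the faces seen per room with awk:

```shell
# Build a tiny sample log in the same tab-delimited layout the PHP script writes:
# date/time, room_name, face_count, interval
printf '01-12-2010 06:01:41 EST\tballroom\t5\t2000\n01-12-2010 06:01:43 EST\tballroom\t3\t2000\n01-12-2010 06:01:44 EST\tkitchen\t1\t2000\n' > /tmp/sensor_log.txt

# Total faces seen per room: field 2 is the room, field 3 is the face count
awk -F'\t' '{sum[$2] += $3} END {for (r in sum) print r, sum[r]}' /tmp/sensor_log.txt
```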

In a “real” system, you’d most likely write these values to a database, but that’s beyond the scope of this post. The main goal here is to show you how to use Processing to send data to a web site, not the particulars of how that web site records the data. If you’re interested in going further on the back-end piece, Kevin Yank’s article Build Your Own Database Driven Web Site Using PHP & MySQL, Part 1: Installation is a good place to start.

Set up the sketch

Now that the web server is set up and able to log data, the next step is to build the Processing sketch that detects faces and reports them back to your server. Before you start, make sure that you’ve installed the controlP5 and OpenCV external libraries for Processing. (If you’re unfamiliar with external libraries, check out How to Import Libraries in Processing on O’Reilly Answers.) Once the libraries are installed, fire up Processing and paste the code for networked_face_sensor.pde into the sketch window:
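Since the sketch listing isn’t included inline here, the following is a rough sketch of the pieces networked_face_sensor.pde needs: capture video, run OpenCV’s face detector, and call transmit() every interval milliseconds. The variable names transmission_url, interval, room_name, and transmit() follow the article; everything else, including the exact library calls, is illustrative rather than the original code, and the controlP5 text field for entering the room name is omitted for brevity:

```java
import processing.video.*;
import hypermedia.video.*;   // the OpenCV library for Processing
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;

String transmission_url = "http://your.server.address/face_sensor/record.php"; // change me
String room_name = "ballroom";   // which camera/location this is
int interval = 2000;             // ms between reports
int last_report = 0;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.settings();                // pick the video source (built-in or external webcam)
  opencv = new OpenCV(this);
  opencv.allocate(width, height);
  opencv.cascade(OpenCV.CASCADE_FRONTALFACE_ALT);   // load the face classifier
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  opencv.copy(cam);                       // hand the current frame to OpenCV
  Rectangle[] faces = opencv.detect();    // find faces in the frame
  noFill();
  stroke(255, 0, 0);
  for (int i = 0; i < faces.length; i++) {
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
  if (millis() - last_report > interval) {
    transmit(faces.length);
    last_report = millis();
  }
}

// Build the query string, URL-encode each value, and request the PHP script.
void transmit(int face_count) {
  try {
    String url = transmission_url
      + "?face_count=" + java.net.URLEncoder.encode(str(face_count), "UTF-8")
      + "&room_name="  + java.net.URLEncoder.encode(room_name, "UTF-8")
      + "&interval="   + java.net.URLEncoder.encode(str(interval), "UTF-8");
    loadStrings(url);  // we ignore the reply; requesting the page is what logs the data
  } catch (Exception e) {
    println("transmit failed: " + e);
  }
}
```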

In addition, you’ll need to modify the transmission_url variable so that it points to the address of your PHP script. The line


String transmission_url = "http://MacOdewahn.home/~odewahn/face_sensor/record.php";

must be updated to


String transmission_url = "http://your personal website address from the sharing panel/face_sensor/record.php";

When you start the sketch, you’ll notice that you must first pick the source video you want to use, which is done with the command cam.settings();. This allows you to hook up an external web camera, rather than just use the built-in webcam. After you’ve selected the source, you should see the video feed.

The face detection and reporting process occurs every 2 seconds, as specified in the interval variable. You can also enter a “room name” so that you can distinguish between multiple data sources. For example, you might have one camera running in your living room and one in your kitchen, with both reporting back to the same central server.

After a few seconds, you can open the log in your web browser and see the data that your sensors have reported. The URL for this is:


http://your personal website address from the sharing panel/face_sensor/sensor_log.txt

It will look something like this:

[Figure: sample entries from sensor_log.txt]

Discussion

As you can see, the Processing code is very similar to the projects we’ve explored in other posts. Basically, all we’re doing here is adding a new method, called transmit(), to report the data back to the server. transmit()’s job is to create a URL with all the information required by our PHP script. Note how each field name in the query string — face_count, interval, and room_name — is used consistently in both the Processing sketch and the PHP script:

[Figure: matching field names between the Processing sketch and the PHP script]

Once we have this URL, Processing’s built-in loadStrings() command executes the PHP script on our server to save the information. Note that we don’t actually care about the results in this example, only that the page is called. However, if we wanted to get more sophisticated, we could have the PHP script return a status code or some other information.

The final point worth noting is how we’ve used Java’s URLEncoder utility class to encode each of the fields. This ensures that the data is correctly transmitted to the server. Once again, we’re taking advantage of some of the powerful goodies that are available in Java to make our life in Processing much easier!
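To make the encoding step concrete, here’s a small standalone Java program (the class and method names are hypothetical, not part of the sketch) that builds the same kind of query string transmit() sends. A value containing a space, like a room name of “grand ballroom,” comes out as grand+ballroom, which is what keeps the URL valid:

```java
import java.net.URLEncoder;

public class QueryDemo {
    // Build the query string record.php expects, encoding each value so
    // spaces and punctuation survive the trip through the URL.
    static String buildQuery(int faceCount, String roomName, int interval) throws Exception {
        return "face_count=" + URLEncoder.encode(Integer.toString(faceCount), "UTF-8")
             + "&room_name=" + URLEncoder.encode(roomName, "UTF-8")
             + "&interval="  + URLEncoder.encode(Integer.toString(interval), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // A space in the room name is encoded as "+"
        System.out.println(buildQuery(5, "grand ballroom", 2000));
        // prints: face_count=5&room_name=grand+ballroom&interval=2000
    }
}
```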


6 thoughts on “Make a networked face sensor with Processing”

  1. mgua.myopenid.com says:

    Processing is great! OpenCV library is a real wealth of interesting algorithms and ready-made code.

    I too did some hacking with Processing and the OpenCV library, and built a face tracker that points a laser pointer toward a detected face, using an external Arduino board to move two servos.

    You can find my code and a short (and quite blurry) video here:
    http://marco.guardigli.it/2010/01/arduinoprocessing-face-follower.html

    Another project using Processing and OpenCV (but not using the Arduino board) is an application that changes the picture on the screen based on the user’s face position, as captured by a webcam.
    This gives the user a three-dimensional perception of the screen contents.
    You can find code and description here:
    http://marco.guardigli.it/2010/01/screen-view-change-basing-on-user-face.html

    Happy Making!

    Marco (@mgua)


    1. Andrew Odewahn says:

      @Marco —

      Love the laser tracker — definitely in the running with scissors category, though! It reminds me of this great exhibit by Chris O’Shea called “Audience” that had a bunch of small, face-tracking mirrors that watched the audience as they watched them. There’s a video here:

      http://www.chrisoshea.org/audience

      I wonder if your code would do the same thing if you replaced the laser with a mirror. Plus, no one would get their eyes put out!

      1. mgua.myopenid.com says:

        Andrew,

        You are right. Lasers are dangerous. This is why I was using it with the cardboard face. :-)

        I needed a way to make it interesting for the kids.
        Definitely a mirror could be used too, but it would not be that easy to move a large mirror with two small servos.

        Another interesting idea would be to train the Haar classifier in OpenCV to recognize a specific and unusual object, like a yellow glove, so as to be able to detect its position in the scene via the webcam. Then the laser could be used to point at specifically defined targets, maybe identified by the glove position when the user says a specific word that is picked up by the computer mic.

        This would allow easy creation of a sort of space and object perception game (could be great for classrooms), where the teacher wearing the glove could point to and name different objects, and the software could associate the coordinates with the object name.

        The trained computer could also “learn” the object names and characteristics by studying the images and training the Haar classifier (this is a little boring).

        The whole thing could come out quite easily and with a natural interface. I guess a PTZ (pan tilt zoom) cam could do much better than a standard one.

        Who is going to try and accept the challenge?

        Marco (@mgua)
