Making Fun: Computer Vision Hair Trimmer

[Video: demonstration of the computer vision hair trimmer]

Part of what makes me a maker is that I prefer to do things myself when I can. I even cut my own hair. Nothing fancy, mind you, just a quick buzz cut with a trimmer. The tricky part, though, is cutting a good line across the back of the neck. It’s not only hard to trim, even using multiple mirrors, but it’s quite obvious when I haven’t been keeping up. Usually I ask my wife to help me, but I like to be self-reliant, and decided to see if there was a fun and educational way to trim it myself.

Pondering this problem, I kept coming back to two things: computer vision and automated heavy equipment. I figured I could use computer vision to track my head and the trimmer, and besides, I had long been looking for an excuse to learn about it. I thought of construction equipment as a model because some control systems for bulldozers use GPS to locate the machine and then adjust the blade according to the requirements of the site plan. In theory, the operator could just drive back and forth over the site many times, and the control system would manage the blade height to produce a perfectly sculpted site. I set out to build a trimmer that I could blindly run up and down the back of my neck and have it automatically turn on or off according to its position.

[Diagram: computer vision hair trimmer system]

My research turned up a great open source computer vision framework by the name of reacTIVision. It includes special graphics called fiducial markers that are printed out and affixed to physical objects for recognition and tracking. Though designed for multitouch tables, the system is flexible enough to be used at greater distances, as long as the markers are sized appropriately. All I had to do was affix some markers to the trimmer and my head, write a Processing sketch to obtain the X and Y coordinates of the markers from the reacTIVision application, and then use an Arduino to control the trimmer’s battery power as necessary.

Not only is this project rather straightforward, it's inexpensive as well. The reacTIVision library is free to use and the markers can be printed at home. I found my construction helmet discarded in the woods behind my house, but you can find one for $5-$15 at your local home store. Hard hats are great for this because they have an adjustable head harness on the inside and plenty of space for mounting on the outer shell. I recommend using a small battery-powered trimmer ($20) and NOT messing around with anything powered by a wall outlet. Any Arduino board could drive a relay to switch the trimmer, but I used the Arduino Micro because it's small and has a USB jack on the board. I didn't write any Arduino code; I just uploaded the Firmata sketch that comes with the Arduino IDE, which lets the board be controlled from a Processing sketch.
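
Before wiring everything together, you can verify the relay control on its own. A minimal Processing sketch along the following lines should click the relay on and off through Firmata; it assumes the relay is on pin 12 and the Arduino is the first serial port in the list, the same assumptions the full sketch below makes, so adjust both for your setup:

import processing.serial.*;
import cc.arduino.*;

Arduino arduino;

void setup()
{
 println(Arduino.list());                        // print the serial ports so you can find the board
 arduino = new Arduino(this, Arduino.list()[0], 115200);
 arduino.pinMode(12, Arduino.OUTPUT);            // pin 12 drives the trimmer relay
}

void draw()
{
 // toggle the relay once per second so you can hear it click
 if ((millis() / 1000) % 2 == 0) {
   arduino.digitalWrite(12, Arduino.HIGH);
 } else {
   arduino.digitalWrite(12, Arduino.LOW);
 }
}

If the relay never clicks, the usual suspects are the serial port index and the baud rate, which has to match the version of Firmata you loaded onto the board.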

The reacTIVision application monitors the laptop’s camera and sends out the positions of the fiducials. The Processing sketch listens to those position reports and calculates the line between the helmet’s left and right fiducial markers. It then runs the position of the trimmer’s fiducial marker through a formula that determines whether the trimmer is above or below that line. If any of the fiducials are missing, it displays “ERROR” and turns the trimmer off. If the trimmer is above the cut line, it displays “KEEP” and turns the trimmer off. If the trimmer is below the cut line, it displays “CUT” and turns the trimmer on. The sketch, which is merely a lightly modified version of the example code included with reacTIVision, follows:

import TUIO.*;
import java.util.*;
import processing.serial.*;
import cc.arduino.*;
Arduino arduino;
TuioProcessing tuioClient;
boolean trimmer_state = false;
float cursor_size = 15;
float object_size = 60;
float table_size = 760;
float scale_factor = 1;
PFont font;

void setup()
{
 size(800,600);
 noStroke();
 fill(0);
 loop();
 frameRate(30);
 hint(ENABLE_NATIVE_FONTS);
 font = createFont("Arial", 18);
 scale_factor = height/table_size;
 tuioClient = new TuioProcessing(this);
 println(Arduino.list());
 arduino = new Arduino(this, Arduino.list()[0], 115200);

 // Set the digital pins as outputs; pin 12 will switch the relay that powers the trimmer
 for (int i = 0; i <= 13; i++)
   arduino.pinMode(i, Arduino.OUTPUT);
}

void draw()
{
 background(255);
 textFont(font,18*scale_factor);
 float obj_size = object_size*scale_factor; 
 float cur_size = cursor_size*scale_factor; 
 // The three fiducials we look for: symbol 0 = left side of the helmet, 1 = right side, 2 = the trimmer
 TuioObject l_ear = null;
 TuioObject r_ear = null;
 TuioObject trimmer = null;
 Vector tuioObjectList = tuioClient.getTuioObjects();
 for (int i=0;i<tuioObjectList.size();i++) {
   TuioObject tobj = (TuioObject)tuioObjectList.elementAt(i);
   stroke(0);
   fill(0,0,0);
   pushMatrix();
   translate(tobj.getScreenX(width),tobj.getScreenY(height));
   rotate(tobj.getAngle());
   rect(-obj_size/2,-obj_size/2,obj_size,obj_size);
   popMatrix();
   fill(255,0,0);
   switch (tobj.getSymbolID()) {
     case 0: 
       text("Left", tobj.getScreenX(width), tobj.getScreenY(height));
       l_ear = tobj;
       break;
     case 1: 
      text("Right", tobj.getScreenX(width), tobj.getScreenY(height));
       r_ear = tobj;
       break;
     case 2: 
       text(tobj.getAngle(), tobj.getScreenX(width), tobj.getScreenY(height));
       trimmer = tobj;
       break;
   }
 }

 textFont(font,148*scale_factor);
 // If any fiducial is missing, fail safe and keep the trimmer off; otherwise use the
 // side-of-line test to decide whether the trimmer is in the cut zone
 if ( (l_ear == null) || (r_ear == null) || (trimmer == null) ) {
   arduino.digitalWrite(12, Arduino.LOW);
   fill(0,255,0);
   text("ERROR", 30, 150);
 } else if (linePointPosition2D( l_ear.getScreenX(width), l_ear.getScreenY(height), r_ear.getScreenX(width), r_ear.getScreenY(height), trimmer.getScreenX(width), trimmer.getScreenY(height)) > 0) {
   arduino.digitalWrite(12, Arduino.HIGH);
   fill(255,0,0);
   text("CUT", 30, 150);
 } else {
   arduino.digitalWrite(12, Arduino.LOW);
   fill(0,255,0);
   text("KEEP", 30, 150);
 }

 Vector tuioCursorList = tuioClient.getTuioCursors();
 for (int i=0;i<tuioCursorList.size();i++) {
   TuioCursor tcur = (TuioCursor)tuioCursorList.elementAt(i);
   Vector pointList = tcur.getPath();
   if (pointList.size()>0) {
     stroke(0,0,255);
      TuioPoint start_point = (TuioPoint)pointList.firstElement();
     for (int j=0;j<pointList.size();j++) {
       TuioPoint end_point = (TuioPoint)pointList.elementAt(j);
       line(start_point.getScreenX(width),start_point.getScreenY(height),end_point.getScreenX(width),end_point.getScreenY(height));
       start_point = end_point;
     }
     stroke(192,192,192);
     fill(192,192,192);
     ellipse( tcur.getScreenX(width), tcur.getScreenY(height),cur_size,cur_size);
     fill(0);
     text(""+ tcur.getCursorID(), tcur.getScreenX(width)-5, tcur.getScreenY(height)+5);
   }
 }

}

// these callback methods are called whenever a TUIO event occurs
// called when an object is added to the scene
void addTuioObject(TuioObject tobj) {
 println("add object "+tobj.getSymbolID()+" ("+tobj.getSessionID()+") "+tobj.getX()+" "+tobj.getY()+" "+tobj.getAngle());
}
// called when an object is removed from the scene
void removeTuioObject(TuioObject tobj) {
 println("remove object "+tobj.getSymbolID()+" ("+tobj.getSessionID()+")");
}
// called when an object is moved
void updateTuioObject (TuioObject tobj) {
 println("update object "+tobj.getSymbolID()+" ("+tobj.getSessionID()+") "+tobj.getX()+" "+tobj.getY()+" "+tobj.getAngle()
 +" "+tobj.getMotionSpeed()+" "+tobj.getRotationSpeed()+" "+tobj.getMotionAccel()+" "+tobj.getRotationAccel());
}
// called when a cursor is added to the scene
void addTuioCursor(TuioCursor tcur) {
 println("add cursor "+tcur.getCursorID()+" ("+tcur.getSessionID()+ ") " +tcur.getX()+" "+tcur.getY());
}
// called when a cursor is moved
void updateTuioCursor (TuioCursor tcur) {
 println("update cursor "+tcur.getCursorID()+" ("+tcur.getSessionID()+ ") " +tcur.getX()+" "+tcur.getY()
 +" "+tcur.getMotionSpeed()+" "+tcur.getMotionAccel());
}
// called when a cursor is removed from the scene
void removeTuioCursor(TuioCursor tcur) {
 println("remove cursor "+tcur.getCursorID()+" ("+tcur.getSessionID()+")");
}
// called after each message bundle
// representing the end of an image frame
void refresh(TuioTime bundleTime) { 
 redraw();
}
/**
 * Line is (x1,y1) to (x2,y2), point is (x3,y3).
 * 
 * Find which side of a line a point is on. This can be done by assuming that the line has a direction, 
 * pointing from its start to its end point. The functions given here will return a negative value ( < 0 ) 
 * if the point is "to the left" of the line, zero ( == 0 ) if the point is on the line and a positive 
 * value ( > 0 ) if it's on "the right".
 * 
 * http://wiki.processing.org/w/Find_which_side_of_a_line_a_point_is_on
 */
float linePointPosition2D ( float x1, float y1, float x2, float y2, float x3, float y3 )
{
 return (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1); 
}
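
As a quick sanity check with made-up coordinates: if the left marker sits at (100, 200) on screen, the right marker at (500, 200), and the trimmer’s marker at (300, 300), which is lower on screen since Y grows downward, the formula gives (500 − 100) × (300 − 200) − (200 − 200) × (300 − 100) = 40,000. That value is positive, so the sketch takes the “CUT” branch.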

I had to run an older version of Processing to get things working on my old Mac running OS 10.6. I also had to try a couple of different versions of Firmata on the Arduino, so with different hardware, your mileage may vary. Don’t be afraid to experiment.

On the subject of older computers, mine was only able to process the video at 20 frames per second. I believe the resulting haircut would improve greatly if I could get that number higher. If you watch the video closely, you’ll see one of my “big misses”: I have the trimmer in the “cut” zone and my twitchy hand moves it quickly into the “keep” zone and into my head before the trimmer is turned off. Moving my hand more slowly would help. I was hoping to build this system around a Raspberry Pi or BeagleBoard, but when I saw how slowly my laptop and desktop were processing frames, I decided to stick with the laptop for a balance of power and portability.
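
To put rough numbers on it: at 20 frames per second there are 50 milliseconds between position updates, so a hand moving the trimmer at, say, half a meter per second covers about 25 mm between updates, and that is before counting the time it takes the relay to actually cut power. That is plenty of distance to overshoot the cut line.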

As you can see from the last shot of the video, I wasn’t able to get a perfect haircut out of the first system test. There’s always next month’s haircut, though, and I’m hoping that I’ve shown enough for you to get started while leaving enough out that you’ll have your own adventure in building and improving this newly invented computer vision hair trimming technology.

See the entire series here.

6 thoughts on “Making Fun: Computer Vision Hair Trimmer”

  1. Sudheer says:

    Oh! It is really amazing. It is a little bit difficult to manage in the beginning, but later I think it would become easy and very comfortable to do at home. I like it very much. But isn’t there any drawback with this machine?

    1. Jeff Highsmith says:

      The only drawbacks are the need for good lighting and a fast computer. There may be a little drift in accuracy when the fiducials aren’t all in the same plane, but the biggest errors come from moving too fast. If you want a “tapered nape” (smiling) instead of a “block nape” (straight across), you’ll need to calculate the distance from the camera to the head using the distance between the helmet’s fiducials so you can do measurements, and figure the arc upon which to cut. Give it a try and let me know how it works for you.

      1. Sudheer says:

        Fine. I will try it once and let you know my experience.
