
The EnableTalk system uses a glove-mounted microcontroller to collate information from a passel of onboard sensors—11 flex sensors, 8 touch sensors, 2 accelerometers, a compass, and a gyroscope—and transmit it wirelessly to a nearby computer or smartphone for translation into machine-generated speech. The Ukrainian development team—Posternikov Anton, Maxim Osika, Anton Stepanov, and Valera Yasakov—recently won first place in the Software Design category of the 2012 Microsoft Imagine Cup and took home $25K. Their prototype includes solar panels to help prolong battery life. [Thanks, Tim!]
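To give a feel for what the translation step might involve, here is a minimal sketch of one plausible stage of such a pipeline: matching a snapshot of the 11 flex-sensor readings against stored gesture templates by nearest-neighbor distance. This is not the EnableTalk team's actual algorithm—the template names and sensor values below are entirely hypothetical, and a real system would calibrate per user and fuse the touch, accelerometer, compass, and gyro data as well.

```python
import math

# Hypothetical gesture "fingerprints": 11 flex-sensor readings per sign,
# scaled so 0.0 = finger straight and 1.0 = fully bent. Real templates
# would come from per-user calibration, not hard-coded values.
TEMPLATES = {
    "hello": [0.1, 0.1, 0.1, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    "yes":   [0.9, 0.9, 0.9, 0.9, 0.9, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8],
}

def classify(reading):
    """Return the name of the template closest to `reading`
    by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda name: dist(reading, TEMPLATES[name]))

# A mostly-straight-fingered reading should match "hello":
print(classify([0.1] * 11))
```

The matched label would then be handed off to a text-to-speech engine on the paired computer or phone.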

EnableTalk project page

More:

Android Sign Language Interpreting Glove

Sean Michael Ragan

I am descended from 5,000 generations of tool-using primates. Also, I went to college and stuff. I write for MAKE, serve as Technical Editor for MAKE magazine, and develop original DIY content for Make: Projects.


Comments

  1. Greg Meece says:

    Now Amy can tell people about the scary gorillas (ref: “Congo” 1995).

  2. Mike says:

    Please read http://www.bda.org.uk/news_two/20aug.php

    “The British Deaf Association (BDA) is concerned about this development for two reasons.

    The first is that any sign language is far more linguistically complex and sophisticated than just a pair of hands. The gloves cannot convey what is on the face, on the lips, and how the signs are given meaning in different locations. The gloves may help with simple gestures but not sign language.”

    1. Sean Ragan says:

      Thank you for this, Mike. I did some brief reading about sign language before I wrote this, and of course that reading did include mention of the fact that signing involves well more than just hand gestures. This should’ve occurred to me.

      I wondered, when I first saw this, about the clumsiness of a glove-based interface, and had the thought that when and if signs-to-speech tech does emerge, it will probably be based on real-time or near-real-time software parsing of video of the person signing, which will A) account for motions and expressions from the whole upper body and face, and B) not require that the signer wear gloves.