The EnableTalk system uses a glove-mounted microcontroller to collate information from a passel of onboard sensors—11 flex sensors, 8 touch sensors, 2 accelerometers, a compass, and a gyroscope—and transmit it wirelessly to a nearby computer or smartphone for translation into machine-generated speech. The Ukrainian development team—Anton Posternikov, Maxim Osika, Anton Stepanov, and Valera Yasakov—recently won first place in the Software Design category at the 2012 Microsoft Imagine Cup, taking home $25K. Their prototype includes solar panels to help prolong battery life. [Thanks, Tim!]
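To give a rough sense of the kind of processing the host software would have to do, here is a minimal, purely hypothetical sketch of matching a frame of flex-sensor readings against a small gesture template table. Only the sensor count (11 flex sensors) comes from the article; the template values, gesture names, and nearest-neighbor approach are invented for illustration and are not the EnableTalk team's actual algorithm.

```python
from math import sqrt

NUM_FLEX = 11  # the glove carries 11 flex sensors (per the article)

# Invented templates: normalized flex readings (0.0 = straight, 1.0 = fully bent)
GESTURE_TEMPLATES = {
    "hello": [0.1] * NUM_FLEX,
    "yes":   [0.9] * NUM_FLEX,
    "no":    [0.9] * 5 + [0.1] * 6,
}

def classify(frame, threshold=0.5):
    """Return the gesture whose template is nearest to `frame`
    (Euclidean distance), or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in GESTURE_TEMPLATES.items():
        dist = sqrt(sum((a - b) ** 2 for a, b in zip(frame, template)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(classify([0.12] * NUM_FLEX))  # close to the "hello" template
print(classify([0.5] * NUM_FLEX))   # ambiguous reading -> None
```

A real system would fuse the touch, accelerometer, compass, and gyroscope data as well, and hand the matched word to a text-to-speech engine on the paired phone or computer.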
6 thoughts on “Glove Based Sign-to-Speech System”
Now Amy can tell people about the scary gorillas (ref: “Congo” 1995).
Please read http://www.bda.org.uk/news_two/20aug.php
“The British Deaf Association (BDA) is concerned about this development for two reasons.
The first is that any sign language is far more linguistically complex and sophisticated than just a pair of hands. The gloves cannot convey what is on the face, on the lips, and how the signs are given meaning in different locations. The gloves may help with simple gestures but not sign language.”
Thank you for this, Mike. I did some brief reading about sign language before I wrote this, and that reading did mention that signing involves far more than hand gestures. This should've occurred to me.
When I first saw this, I wondered about the clumsiness of a glove-based interface. My thought was that if and when sign-to-speech tech does emerge, it will probably be based on real-time or near-real-time software parsing of video of the person signing, which would A) account for motions and expressions from the whole upper body and face, and B) not require that the signer wear gloves.