
Haptic Radar

The University of Tokyo’s Ishikawa Komuro Laboratory focuses its research on sensory information and related technologies. The lab’s “Haptic Radar / Extended Skin Project” uses body-mounted range sensors and small vibrating motors to alert the wearer to approaching objects:

We are developing a wearable and modular device allowing users to perceive and respond to spatial information using haptic cues in an intuitive and unobtrusive way. The system is composed of an array of “optical-hair modules”, each of which senses range information and transduces it as an appropriate vibro-tactile cue on the skin directly beneath it. Analogies for our artificial sensory system in the animal world would be cellular cilia, insect antennae, and the specialized sensory hairs of mammalian whiskers.

The Haptic Radar / Extended Skin Project
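The per-module behavior the lab describes — range in, vibro-tactile cue out — can be sketched in a few lines. The following is a minimal illustration, not the project’s actual firmware: the distance thresholds and the linear closer-equals-stronger mapping are assumptions for the sake of example.

```python
def range_to_intensity(distance_cm, max_range_cm=80.0, min_range_cm=10.0):
    """Map a measured distance to a vibration-motor intensity (0.0-1.0).

    Objects beyond max_range_cm produce no vibration; objects at or
    inside min_range_cm produce full vibration; in between, intensity
    rises linearly as the object approaches. All thresholds here are
    illustrative assumptions, not values from the project.
    """
    if distance_cm >= max_range_cm:
        return 0.0
    if distance_cm <= min_range_cm:
        return 1.0
    return (max_range_cm - distance_cm) / (max_range_cm - min_range_cm)

# Each "optical-hair module" would run this loop: read its range
# sensor, compute an intensity, and drive the motor beneath it.
readings_cm = [100.0, 60.0, 25.0, 5.0]
intensities = [range_to_intensity(d) for d in readings_cm]
```

In a real module the intensity would typically become a PWM duty cycle on the motor driver, and the mapping might be nonlinear to match how skin perceives vibration strength.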

Collin Cunningham

Born, drew a lot, made video, made music on 4-track, then computer, more songwriting, met future wife, went to art school for video major, made websites, toured in a band, worked as web media tech, discovered electronics, taught myself electronics, blogged about DIY electronics, made web videos about electronics and made music for them … and I still do!



Comments

  1. BigD145 says:

We don’t have whiskers or antennae. Our brains aren’t wired to react adequately on that front. However, we do have hair and are wired to react to changes in it. The headband would extend the hairline, but it’ll be a bit of a leap to figure out exactly which way to move. I doubt you could avoid someone trying to punch you, which is equivalent to having a stray I-beam swinging at high velocity.

  2. Master Higgins says:

    My spidey-sense is tingling.

  3. PJ says:

    @BigD145
No whiskers or antennae? Our bodies are covered with whiskers (hair), and we have many feeler antennae (fingers, toes). The brain is a very plastic organ; it’s not “wired” so that a specific sensory input results in a precise output. Modern theories model the brain as a pattern-recognition system: it’s not so much where the input comes from (eyes, skin, tongue) as whether the input matches the same pattern.

This concept is not new or novel (the use of range sensors might be). Here’s a paper published 10 years ago in which electro-stimulation of the tongue was used to simulate vision of objects:

    http://cat.inist.fr/?aModele=afficheN&cpsidt=1838034

    And here’s a more recent introductory paper
    http://www.sigaccess.org/newsletter/sept06/Sept06_04.pdf

Perhaps it can’t prevent a swinging I-beam accident now, but it sure as hell never will if the prior work and research isn’t done.
