A rendering of an operating room of the future, with Tamara’s devices installed.
Student Tamara Worst has completed a fascinating project with the Interaction Design group IxD Hof, in cooperation with Siemens Healthcare, that frees doctors from having to use their hands for menial tasks during operations. Siemens was looking for a non-contact interaction solution for surgeons in the operating room, so Tamara used two sensors, a Wii Remote and a Microsoft Kinect, to detect the orientation and position of the surgeon's right foot. She then wrote software in Processing that lets surgeons view and manipulate images and 3D models without using their hands.
A Wii Remote attached to the right shoe. In the future Tamara hopes to replace the bulky remote with a single accelerometer.
The Wii Remote and the Kinect each accomplish a different task: the Wii Remote measures the orientation of the doctor's foot, allowing precise control over the view and orientation of images and models, while the Kinect detects the position of the right foot, so the surgeon can trigger different tasks depending on where the foot lies.
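Tamara's own code isn't posted here, but the basic sensor plumbing in Processing might look something like the sketch below. It assumes the Wii Remote's accelerometer readings arrive as OSC messages (e.g., forwarded by a bridge like DarwiinRemoteOSC) read with the oscP5 library, and that the Kinect's depth camera is read through the SimpleOpenNI library; the OSC address pattern, port, and variable names are illustrative guesses, not her actual setup.

```processing
// Two-sensor input sketch (assumptions: oscP5 receives the Wii Remote's
// accelerometer data from an OSC bridge; SimpleOpenNI reads the Kinect).
import oscP5.*;
import netP5.*;
import SimpleOpenNI.*;

OscP5 osc;
SimpleOpenNI kinect;

float pitch, roll;  // foot orientation, from the Wii Remote

void setup() {
  size(640, 480);
  osc = new OscP5(this, 5600);     // port is whatever the bridge sends to
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void oscEvent(OscMessage m) {
  // Hypothetical address pattern; check what your Wiimote bridge emits.
  if (m.checkAddrPattern("/wii/acc")) {
    pitch = m.get(0).floatValue();  // forward/back lean
    roll  = m.get(1).floatValue();  // left/right lean
  }
}

void draw() {
  kinect.update();
  // The foot's position would be extracted from this depth map, e.g. by
  // finding the nearest blob near the floor plane (omitted here).
  image(kinect.depthImage(), 0, 0);
  fill(255);
  text("pitch " + pitch + "   roll " + roll, 20, 20);
}
```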
Image Actions (sketched in code below):
- Browse catalog of images – lean foot forward and back to scroll forward and back.
- Change contrast of images – lean foot left and right to increase and decrease contrast.
- Zoom – zoom in by stepping once forward, zoom out by stepping once backward.
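Here's how the lean gestures might translate into image actions in Processing. This sketch is self-contained: the mouse stands in for the foot's pitch and roll so the mapping can be tried without any hardware, and the dead zone, scroll delay, and contrast step are made-up values.

```processing
// Lean-gesture mapping for the image actions: pitch scrolls the catalog,
// roll changes contrast. The mouse stands in for the Wii Remote here;
// all thresholds are illustrative.
int imageIndex = 0;
int numImages = 20;
float contrast = 1.0;
int lastScroll = 0;

void setup() {
  size(640, 480);
  textSize(16);
}

void draw() {
  background(40);
  // Stand-ins for the Wii Remote readings, scaled to -1..1.
  float pitch = map(mouseY, 0, height, -1, 1);  // forward/back lean
  float roll  = map(mouseX, 0, width, -1, 1);   // left/right lean

  // Browse: a lean past the dead zone steps through the catalog,
  // rate-limited so one lean doesn't skip dozens of images.
  if (abs(pitch) > 0.4 && millis() - lastScroll > 400) {
    imageIndex = constrain(imageIndex + (pitch > 0 ? 1 : -1), 0, numImages - 1);
    lastScroll = millis();
  }

  // Contrast: a left/right lean nudges the value a little each frame.
  if (abs(roll) > 0.4) {
    contrast = constrain(contrast + (roll > 0 ? 0.01 : -0.01), 0.2, 3.0);
  }

  fill(255);
  text("image " + (imageIndex + 1) + " / " + numImages, 20, 30);
  text("contrast " + nf(contrast, 1, 2), 20, 55);
}
```

Zooming by stepping forward or backward would show up as a discrete jump in the foot position reported by the Kinect rather than a lean, so it's left out of this sketch.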
To switch between the 2D image and 3D model views, the doctor moves his or her foot one step to the right and leans forward or back to cycle through the available modes; stepping back to the "home" position selects the current mode.
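The mode switch is essentially a small state machine keyed to foot zones. Here is a self-contained sketch of that logic, with mouse input standing in for both sensors and the zone boundary chosen arbitrarily:

```processing
// Zone-based mode switching: stepping into the right-hand zone lets the
// surgeon cycle modes by leaning; stepping back home commits the choice.
// The zone boundary and mouse stand-ins are illustrative.
String[] modes = { "2D images", "3D model" };
int currentMode = 0;     // committed, active mode
int pendingMode = 0;     // mode previewed while in the select zone
boolean inSelectZone = false;
boolean leaned = false;  // edge trigger so one lean = one step

void setup() {
  size(640, 240);
  textSize(16);
}

void draw() {
  background(30);
  float footX = map(mouseX, 0, width, -1, 1);   // Kinect foot-position stand-in
  float pitch = map(mouseY, 0, height, -1, 1);  // Wii Remote lean stand-in

  if (footX > 0.5) {
    // One step to the right: each distinct lean cycles the pending mode.
    inSelectZone = true;
    if (abs(pitch) > 0.6) {
      if (!leaned) {
        pendingMode = (pendingMode + 1) % modes.length;
        leaned = true;
      }
    } else {
      leaned = false;
    }
  } else if (inSelectZone) {
    // Returning to the home zone selects whatever was previewed.
    currentMode = pendingMode;
    inSelectZone = false;
  }

  fill(255);
  text("active mode: " + modes[currentMode], 20, 40);
  if (inSelectZone) text("previewing: " + modes[pendingMode], 20, 70);
}
```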
3D Model Actions (sketched in code below):
- Rotate object – lean foot left and right.
- Tilt object – lean foot forward to rotate away and back to rotate towards.
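In 3D mode, the same two lean axes could drive rotation directly. Again a self-contained Processing sketch, with the mouse standing in for the Wii Remote and a plain box standing in for the medical model:

```processing
// 3D-mode mapping: roll spins the model around the vertical axis, pitch
// tilts it toward or away from the viewer. The rotation rate and dead
// zone are arbitrary choices.
float yaw = 0;   // accumulated left/right rotation
float tilt = 0;  // accumulated forward/back rotation

void setup() {
  size(640, 480, P3D);
}

void draw() {
  background(20);
  lights();
  float rollIn  = map(mouseX, 0, width, -1, 1);
  float pitchIn = map(mouseY, 0, height, -1, 1);

  // Lean past the dead zone to rotate; a deeper lean rotates faster.
  if (abs(rollIn) > 0.2)  yaw  += 0.03 * rollIn;
  if (abs(pitchIn) > 0.2) tilt += 0.03 * pitchIn;

  translate(width / 2, height / 2, 0);
  rotateX(tilt);
  rotateY(yaw);
  box(150);  // stand-in for the medical 3D model
}
```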
In this lengthy post, Tamara goes in depth on her prototyping process. She initially used simple pressure sensors to detect the different foot movements and orientations, but found that the wires required to connect the sensors were clunky and annoying and restricted the surgeon's movement in the operating room.
I particularly like one of the possible improvements she suggests: embedding a wireless multitouch system in the shoe to identify individual doctors by the force profile created by their feet. Each doctor could then map specific functions to foot orientations at will, and the device would recognize who is wearing it and switch the computer interface and features automatically.
How would you use these two powerful sensors to accomplish a helpful task in the OR? In what other fields would a 3D model viewer like this be useful? Share your thoughts with a comment below.