What works well as a gaming input device also appears to be a useful tool for human-robot interaction. University of Calgary students Cheng Guo and Ehud Sharlin ran a study in which participants posed Aibos and navigated them through obstacles using both a keypad and a Wiimote-based gesture interface:

For the navigation tasks we did not expect that there would be a significant difference between the numbers of errors participants made using the different techniques. However, the data showed the opposite. Participants made 43% more errors with the keypad interface than with the Wiimote interface in the navigation tasks. Many participants felt that this was due to the small key size and the unintuitive mapping between buttons and robot actions.

Moreover, gesture input tends to support simultaneous input compared to button input. As one of the participants commented, “I could do both hands (both arm movements) at the same time without a lot of logical thinking (with the Wiimote/Nunchuk interface), where with the keyboard I had to press one (button) and the other (button) if I was doing two hand movements at the same time. Although they would be in time.”

I wonder if the same holds true as the gestures become more complex to support a larger command set. For a reduced set of instructions, however, this really makes a lot of sense. Knowing the spatial position of your hands is second nature, making simple gestures much more intuitive than pressing keys on a keyboard.
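To get a feel for why a small command set maps so naturally onto tilt gestures, here is a minimal sketch of the kind of mapping an interface like this might use. This is purely illustrative: the thresholds, command names, and dominant-axis rule are my assumptions, not details from the paper.

```python
# Hypothetical sketch: mapping Wiimote-style accelerometer tilt (in g units)
# to a reduced robot navigation command set. Thresholds and command names
# are illustrative assumptions, not taken from the study.

TILT_THRESHOLD = 0.4  # tilt (in g) needed before a command triggers

def classify_tilt(x: float, y: float) -> str:
    """Map roll (x) and pitch (y) tilt to one of five navigation commands."""
    if abs(x) < TILT_THRESHOLD and abs(y) < TILT_THRESHOLD:
        return "stop"  # controller roughly level: no command
    # Pick the dominant axis so a diagonal tilt resolves to one command.
    if abs(y) >= abs(x):
        return "forward" if y > 0 else "backward"
    return "turn_right" if x > 0 else "turn_left"
```

With only five commands, each gesture stays coarse and unambiguous; the question is whether this degrades gracefully once dozens of commands have to share the same tilt space.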

Wiimote-controlled Aibo – Link, Paper (PDF)