
Implementation of Virtual Chopsticks
A conventional glove-type hand gesture input device is used to measure the motion of the user's hand and fingers while handling the virtual chopsticks. However, if a one-to-one mapping between the corresponding joint angles is used, the output motion does not always match the input motion, because of measurement errors, differences in skeletal structure, and so on. The problem is even more serious when the motion data is used in an interactive environment such as a virtual reality system. The motion that the user imagines (mental model) deviates from the reproduced motion (system behavior), which seriously degrades intuitiveness and makes the user interface hard to use.
Therefore, we propose a technique that uses a multiple regression formula to establish the relationship between the motion data sequences measured by the joint angle sensors of a hand gesture input device and the status of the hand gesture while operating the chopsticks. Multiple regression analysis represents the relationship between several predictor variables and a criterion variable. Here, the joint angles measured by the hand gesture input device are used as the predictor variables, and the distance between the tips of the chopsticks is used as the criterion variable to estimate the status of the chopstick-handling task.
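As a minimal sketch of the fitting step described above, the following Python code estimates a linear multiple regression formula by ordinary least squares, mapping several joint angles (predictor variables) to the distance between the chopstick tips (criterion variable). The sensor count, the simulated calibration data, and the function names are illustrative assumptions, not taken from the original system.

```python
import numpy as np

# Hypothetical calibration session: 200 samples from 5 joint-angle
# sensors of a glove-type device (values and names are illustrative).
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 1.5, size=(200, 5))        # joint angles (rad)
true_weights = np.array([2.0, -1.0, 0.5, 0.0, 1.5])  # unknown in practice
tip_distance = angles @ true_weights + 3.0 + rng.normal(0.0, 0.01, 200)

# Ordinary least squares with an intercept column appended to the angles.
X = np.hstack([angles, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(X, tip_distance, rcond=None)

def estimate_tip_distance(joint_angles):
    """Apply the fitted regression formula to a new angle reading."""
    return float(np.append(joint_angles, 1.0) @ coef)
```

In this sketch the regression recovers the underlying weights from the calibration samples; in practice the coefficients would be fitted from recorded finger motion while the user opens and closes real or virtual chopsticks.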
Once the multiple regression formula is established for each user, even users who cannot handle real chopsticks properly should be able to operate the virtual chopsticks in a way that matches their mental image, since the formula is derived from the multiple joint angles measured as finger motion. This formula is therefore used to represent the basic motion of the chopsticks. With this method, exact calibration is unnecessary, even for a group of users with a variety of skeletal structures.
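The per-user calibration described above can be sketched as follows: each user's short calibration session yields an individual regression formula, which is then used at runtime to drive the virtual chopsticks, so no exact skeletal calibration is required. All identifiers here are hypothetical, introduced only for illustration.

```python
import numpy as np

def fit_user_formula(angles, tip_distance):
    """Least-squares fit of tip distance from joint angles, with intercept."""
    X = np.hstack([angles, np.ones((len(angles), 1))])
    coef, *_ = np.linalg.lstsq(X, tip_distance, rcond=None)
    return coef

# One regression formula per user, keyed by a user identifier.
user_formulas = {}

def calibrate(user_id, angles, tip_distance):
    """Record a user's formula from their own calibration samples."""
    user_formulas[user_id] = fit_user_formula(angles, tip_distance)

def virtual_tip_distance(user_id, joint_angles):
    """Drive the virtual chopsticks from this user's own formula."""
    coef = user_formulas[user_id]
    return float(np.append(joint_angles, 1.0) @ coef)
```

Because each formula is fitted from the user's own measurements, differences in hand size or sensor placement are absorbed into the regression coefficients rather than requiring explicit geometric calibration.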
Application of the proposed method using multiple regression analysis is not limited to operating chopsticks. It can also be applied to other hand gestures and/or to the motion of any body part that can be represented by a few parameters. For example, generating sign language from the user's intuitive hand actions might be a promising application.