Microsoft researchers are working on ways to use the Kinect’s body-reading capabilities to understand sign language inputs.
Microsoft Research Asia and the Chinese Academy of Sciences’ Institute of Computing Technology have collaborated to use the Kinect for computer sign-language recognition. In a demonstration video, they showed the Kinect translating signs by tracking hand and body movements.
Microsoft Research says in “translation mode” the Kinect can translate sign language into text or speech. Both isolated word recognition and continuous sentence recognition were tested. In “communications mode,” an avatar can help an individual who is deaf or hard-of-hearing communicate with someone who can hear by converting the sign inputs into text and vice-versa.
It all works through a process Microsoft Research refers to as “3D trajectory matching”: Kinect for Windows software helps decipher the hand movements, which are then matched against stored examples to identify a word.
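The article doesn’t detail the matching algorithm, but trajectory matching of this kind is commonly done with dynamic time warping (DTW), which compares a tracked 3D hand path against stored word templates while tolerating differences in signing speed. The sketch below is an illustration of that general idea, not the researchers’ actual method; the function names and the template data are hypothetical.

```python
import math

def dtw_distance(traj_a, traj_b):
    """Dynamic time warping distance between two 3D trajectories,
    each given as a list of (x, y, z) hand positions over time."""
    n, m = len(traj_a), len(traj_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(traj_a[i - 1], traj_b[j - 1])  # Euclidean step cost
            cost[i][j] = d + min(cost[i - 1][j],      # stretch trajectory A
                                 cost[i][j - 1],      # stretch trajectory B
                                 cost[i - 1][j - 1])  # match the two frames
    return cost[n][m]

def match_word(observed, templates):
    """Return the word whose stored template trajectory is closest
    to the observed trajectory under DTW."""
    return min(templates, key=lambda word: dtw_distance(observed, templates[word]))
```

For example, an observed hand path that sweeps along the x-axis would match a stored x-axis template more closely than a y-axis one, even if the signer moved at a different speed than the person who recorded the template.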
This Kinect research initiative could bring improved and more natural accessibility options for those who primarily communicate with sign language.
“We believe that IT [information technology] should be used to improve daily life for all persons,” Guobin Wu, a research program manager from Microsoft Research Asia, wrote in an official blog post.
“While it is still a research project, we ultimately hope this work can provide a daily interaction tool to bridge the gap between the hearing and the deaf and hard of hearing in the near future.”

To read more about the sign-language recognition project, see the researchers’ paper (PDF), which summarizes their findings.