Monday, April 11, 2011

Paper Reading #21 - A Multimodal Labeling Interface for Wearable Computing

Comments

Steven

Adam


Reference

Shanqing Li and Yunde Jia. "A Multimodal Labeling Interface for Wearable Computing." Proceedings of IUI 2010, Hong Kong.


Summary

Wearable computers pose an interesting question when it comes to user input. This paper examines labeling real-world objects with a wearable computer system, without the use of a keyboard or mouse. The system is equipped with a stereo camera and speech recognition. Using these two modalities, the user indicates a region of interest with a pointing gesture recognized by the vision system, then utters a word to describe the selected real-world object, thereby labeling it for future recognition by the system. User studies showed a large speed boost over traditional keyboard-and-mouse entry, with performance best on large, easily distinguishable objects.
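The paper doesn't publish its code, but the interaction loop it describes is simple to picture: the vision system hands over a gesture-selected region, the speech recognizer hands over a word, and the two get paired into a label. Here's a minimal Python sketch of that flow; the callback names and data types are my own hypothetical stand-ins, not anything from the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Region:
    """Bounding box of a gesture-selected region, in image coordinates."""
    x: int
    y: int
    width: int
    height: int

@dataclass
class LabeledObject:
    """A region of interest paired with its spoken label."""
    region: Region
    label: str

class MultimodalLabeler:
    """Binds the most recent gesture-selected region to the next spoken word."""

    def __init__(self) -> None:
        self.pending_region: Optional[Region] = None
        self.labeled_objects: list[LabeledObject] = []

    def on_gesture(self, region: Region) -> None:
        # Vision-system callback: the user pointed at something, so
        # remember the region until the accompanying speech arrives.
        self.pending_region = region

    def on_speech(self, word: str) -> None:
        # Speech-recognizer callback: bind the uttered word to the
        # most recently selected region, completing one label.
        if self.pending_region is not None:
            self.labeled_objects.append(LabeledObject(self.pending_region, word))
            self.pending_region = None

# Example: the user points at a mug and says "mug".
labeler = MultimodalLabeler()
labeler.on_gesture(Region(x=120, y=80, width=60, height=90))
labeler.on_speech("mug")
print(labeler.labeled_objects)
```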


Analysis

Well, mission accomplished, if you want to wear a bulky computer system and label things in well-lit, low-noise environments. Otherwise, this technology is probably not quite ready for you. The paper addresses valid concerns about input devices for wearable computers, but the strong results are hard to believe. Speech recognition and computer vision are both very difficult problems, especially when users are in uncontrolled environments with background noise and varying lighting conditions. Wearable computing is supposedly emerging as a convenience, but right now these systems seem impractical for daily use. The gesturing concept presented in this paper is logical, but the wearable hardware itself seems a bit imposing. Overall, the paper presents a great prototype, but it isn't quite ready for mainstream use.
