
I recently read “Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes”, a UIST ’07 paper by Jacob O. Wobbrock, Andrew D. Wilson, and Yang Li. The paper addresses a problem facing many interface designers today: the complexity and inaccessibility of gesture recognition software. The authors designed a cheap, easy alternative that anyone can implement in around 100 lines of code. The following is the list of requirements they set for the new recognizer:
1. be resilient to variations in sampling due to movement speed or sensing;
2. support optional and configurable rotation, scale, and position invariance;
3. require no advanced mathematical techniques (e.g., matrix inversions, derivatives, integrals);
4. be easily written in few lines of code;
5. be fast enough for interactive purposes (no lag);
6. allow developers and application end-users to “teach” it new gestures with only one example;
7. return an N-best list with sensible [0..1] scores that are independent of the number of input points;
8. provide recognition rates that are competitive with more complex algorithms previously used in HCI to recognize the types of gestures shown in Figure 1.
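To make the “100 lines of code” claim concrete, here is a rough Python sketch of the recognizer's core idea as I understand it: resample each stroke to a fixed number of points, rotate it by its indicative angle, scale it to a reference square, translate it to the origin, and then score it against every stored template by average point-to-point distance. This is a simplified illustration, not the paper's exact pseudocode; in particular, it skips the paper's golden-section search over candidate rotation angles, and names like `N_POINTS` and `SQUARE_SIZE` are illustrative choices.

```python
import math

N_POINTS = 64        # resample every stroke to this many points
SQUARE_SIZE = 250.0  # side of the reference square used for scaling

def path_length(pts):
    return sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))

def resample(pts, n=N_POINTS):
    # Step 1: resample to n evenly spaced points along the path,
    # so fast and slow strokes yield comparable point sequences.
    interval = path_length(pts) / (n - 1)
    pts, out, D = list(pts), [pts[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if D + d >= interval:
            t = (interval - D) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # q becomes the start of the next segment
            D = 0.0
        else:
            D += d
        i += 1
    return (out + [pts[-1]] * (n - len(out)))[:n]  # pad/trim for rounding

def centroid(pts):
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def rotate_to_zero(pts):
    # Step 2: rotate around the centroid so the angle from the
    # centroid to the first point (the "indicative angle") is zero.
    cx, cy = centroid(pts)
    theta = math.atan2(pts[0][1] - cy, pts[0][0] - cx)
    c, s = math.cos(-theta), math.sin(-theta)
    return [((x - cx) * c - (y - cy) * s + cx,
             (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

def scale_to_square(pts, size=SQUARE_SIZE):
    # Step 3: scale (non-uniformly) to a reference square.
    xs, ys = [x for x, _ in pts], [y for _, y in pts]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return [(x * size / w, y * size / h) for x, y in pts]

def translate_to_origin(pts):
    # Step 4: move the centroid to (0, 0).
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def normalize(pts):
    return translate_to_origin(scale_to_square(rotate_to_zero(resample(pts))))

def recognize(points, templates):
    # Compare the candidate against every stored template and return the
    # best match plus a [0..1] score (1 = perfect, per the half-diagonal bound).
    candidate = normalize(points)
    half_diag = 0.5 * math.sqrt(2) * SQUARE_SIZE
    best_name, best_d = None, float("inf")
    for name, tmpl in templates.items():
        d = sum(math.dist(a, b) for a, b in zip(candidate, tmpl)) / N_POINTS
        if d < best_d:
            best_name, best_d = name, d
    return best_name, 1.0 - best_d / half_diag
```

Because `recognize` loops over every template, its cost grows linearly with the number of templates, which is exactly the scaling concern with very large template sets.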

Images from “Gestures without Libraries, Toolkits or Training: A $1 Recognizer for User Interface Prototypes” (Image 1 from page 1; Image 2 from page 4)
Seems like a really simple implementation if it only takes 100 lines of code. Since a gesture has to be compared against all of the templates, I would think this method could get a little slow if you were using a ton of templates.