Monday, September 3, 2007

"Those Look Similar!" Issues in Automating Gesture Design Advice

by

A. Chris Long, Jr., James A. Landay, and Lawrence A. Rowe

Summary

This paper presents quill, an application that helps interface designers determine whether gestures are too similar. The tool can also advise designers on how to make their gestures less similar, not only from the user's perspective but also from the computer's, since a recognizer may confuse gestures that people find easy to tell apart.

The researchers conducted three user studies to develop a model of gesture similarity. In the first two, 42 participants chose the most different gesture from a group of three; in the third study, 266 participants judged the similarity of pairs of gestures on an absolute scale.

The goal of quill is to allow designers with no prior knowledge of gesture recognition to create gestures that a computer can recognize, but that users will not confuse as too similar. The quill system learns gestures using the Rubine algorithm and a training set provided by the user. Quill analyzes the gestures in the background and provides feedback when they are too similar to other gesture classes. The paper goes on to discuss concerns about when to give feedback and how much feedback to provide. It also discusses implementation concerns, particularly how to lock the user's actions so that analysis can complete.
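To make the Rubine approach concrete, here is a minimal sketch of the idea: each stroke is reduced to a vector of geometric features, and classes are learned from example strokes. This toy version computes only 4 of Rubine's 13 features and substitutes a nearest-mean classifier for his linear discriminant; all function names here are illustrative, not quill's actual API.

```python
import math

def rubine_features(points):
    """Compute a small subset of Rubine-style geometric features for a stroke.

    `points` is a list of (x, y) tuples with at least 3 points. The full
    algorithm uses 13 features plus timing; this sketch keeps 4:
    initial-angle cosine/sine, bounding-box diagonal, and total length.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Cosine and sine of the initial angle (taken over the first few points)
    dx0 = points[2][0] - points[0][0]
    dy0 = points[2][1] - points[0][1]
    hyp = math.hypot(dx0, dy0) or 1.0
    f1, f2 = dx0 / hyp, dy0 / hyp
    # Length of the bounding-box diagonal
    f3 = math.hypot(max(xs) - min(xs), max(ys) - min(ys))
    # Total stroke length (sum of segment lengths)
    f4 = sum(math.hypot(points[i + 1][0] - points[i][0],
                        points[i + 1][1] - points[i][1])
             for i in range(len(points) - 1))
    return [f1, f2, f3, f4]

def train_mean_classifier(examples):
    """Train a toy classifier: one average feature vector per gesture class.

    `examples` maps a class label to a list of strokes. Rubine's method fits
    a linear discriminant from per-class covariance; a nearest-mean
    classifier is a simplified stand-in.
    """
    means = {}
    for label, strokes in examples.items():
        feats = [rubine_features(s) for s in strokes]
        n = len(feats)
        means[label] = [sum(f[i] for f in feats) / n for i in range(4)]
    return means

def classify(means, stroke):
    """Return the class whose mean feature vector is closest to the stroke."""
    f = rubine_features(stroke)
    return min(means,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(f, means[lbl])))
```

A similarity checker in the spirit of quill could then compare class mean vectors: two classes whose means are close in feature space are the ones the recognizer is likely to confuse.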

The researchers note that their models tended to overestimate gesture similarity when gestures were, or contained, a letter. The authors also propose that future work could include automatically repairing similar gestures by morphing them into a new form.

Discussion

Just thinking about the fact that letters were deemed similar raises an interesting thought. We view letters as dissimilar because of the metaphor attached to each shape (a sound), but the shapes of letters do have a lot of similarity ("u" and "v", "m" and "n"). Many times when looking at people's handwritten notes, the only way I could determine an individual letter was in the context of a word. The question becomes: how similar do we want gestures to be so that users can mentally group them into a category (all editing strokes), versus how dissimilar should they be so that they can be recognized and drawn easily? For example, the cut/paste gestures need to convey similarity (both are editing gestures), but at the same time be inherently opposite. This seems like a difficult thing to accomplish. Sure, quill can tell if gestures are similar, but I don't think it can help in making a metaphorical connection between gestures and the actions they are supposed to produce.

Citation

Long, A. C., Landay, J. A., and Rowe, L. A. 2001. "Those look similar!" Issues in automating gesture design advice. In Proceedings of the 2001 Workshop on Perceptive User Interfaces (Orlando, Florida, November 15-16, 2001). PUI '01, vol. 15. ACM Press, New York, NY, 1-5. DOI= http://doi.acm.org/10.1145/971478.971510

2 comments:

rg said...

I think that you describe a serious difficulty in sketch interaction. How do you maintain distinctions necessary for high recognition rates without sacrificing familiarity and consistency to the user?

Miqe said...

I kind of agree. I think Quill is useful in the sense that it will match physically similar things, but who knows what the designer or the user is thinking? Some objects are VERY similar yet we can tell the difference between them because of instinct or just because we've seen them so many times.