by A. Chris Long, Jr., James A. Landay, and Lawrence A. Rowe
Summary
This paper presents quill, an application that helps interface designers determine whether gestures are too similar. The tool also advises designers on how to make their gestures less similar, not only from a user's point of view but also from the recognizer's, since a computer may confuse gestures that people find distinct. The researchers conducted three user studies to build a model of gesture similarity. In the first two, 42 participants chose the most different gesture from a group of three; in the third study, 266 participants judged the similarity of pairs of gestures on an absolute scale.
The goal of quill is to let designers with no prior knowledge of gesture recognition create gestures that a computer can recognize but that users won't confuse as too similar. The quill system learns gestures using the Rubine algorithm and a training set provided by the designer. Quill analyzes the gestures in the background and provides feedback if they are too similar to other gesture classes. The paper goes on to discuss concerns about when to give feedback and how much feedback to provide. It also discusses implementation concerns, particularly how to lock the user's actions so analysis can be completed.
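To make the background similarity analysis concrete, here is a minimal sketch of the general idea: extract a few Rubine-style geometric features from each gesture, average them per gesture class, and warn about class pairs whose feature vectors are close. This is not quill's actual implementation; the feature subset, the `flag_similar_classes` helper, and the `threshold` cutoff are illustrative assumptions standing in for the paper's learned similarity model.

```python
import math

def rubine_features(points):
    """Compute a small subset of Rubine-style geometric features for a gesture
    given as a list of (x, y) points. Illustrative only; the full Rubine
    feature set also includes angle, speed, and duration features."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # Bounding-box diagonal length and angle
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    bbox_diag = math.hypot(w, h)
    bbox_angle = math.atan2(h, w) if (w or h) else 0.0
    # Distance and angle between the first and last points
    dx, dy = xs[-1] - xs[0], ys[-1] - ys[0]
    endpoint_dist = math.hypot(dx, dy)
    endpoint_angle = math.atan2(dy, dx)
    # Total path length of the stroke
    path_len = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
                   for i in range(len(points) - 1))
    return [bbox_diag, bbox_angle, endpoint_dist, endpoint_angle, path_len]

def flag_similar_classes(class_examples, threshold=25.0):
    """Average the features of each class's training examples and report
    pairs of classes whose mean feature vectors lie within `threshold`
    (an arbitrary cutoff, not the paper's human-derived similarity model)."""
    means = {}
    for name, examples in class_examples.items():
        feats = [rubine_features(ex) for ex in examples]
        means[name] = [sum(col) / len(col) for col in zip(*feats)]
    names = sorted(means)
    warnings = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            dist = math.dist(means[a], means[b])
            if dist < threshold:
                warnings.append((a, b, dist))
    return warnings
```

In quill, warnings like these would surface as unobtrusive background feedback while the designer keeps adding training examples, which is why the paper spends time on when and how much feedback to show.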
The researchers note that their model tends to overestimate gesture similarity when a gesture was, or contained, a letter. The authors also propose that future work could include automatically repairing similar gestures by morphing them into a new form.
2 comments:
I think you describe a serious difficulty in sketch interaction. How do you maintain the distinctions necessary for high recognition rates without sacrificing familiarity and consistency for the user?
I kind of agree. I think Quill is useful in the sense that it will match physically similar things, but who knows what the designer or the user is thinking? Some objects are VERY similar yet we can tell the difference between them because of instinct or just because we've seen them so many times.