Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes

@inproceedings{Wobbrock2007GesturesWL,
  title={Gestures without libraries, toolkits or training: a \$1 recognizer for user interface prototypes},
  author={Jacob O. Wobbrock and Andrew D. Wilson and Yang Li},
  booktitle={Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology},
  year={2007}
}
Although mobile, tablet, large display, and tabletop computers increasingly present opportunities for using pen, finger, and wand gestures in user interfaces, implementing gesture recognition has largely been the privilege of pattern matching experts, not user interface prototypers. [...] Key result: these results were nearly identical to DTW and superior to Rubine.
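For context, the core of the $1 approach can be sketched in a few lines: resample each stroke to a fixed number of points, rotate it so its indicative angle (centroid to first point) is zero, scale and translate it, and score candidates by average point-to-point distance. The sketch below is a simplified illustration under those assumptions, not the authors' implementation — all names are mine, and it omits the golden-section search over candidate rotations that the full recognizer performs.

```python
import math

def resample(points, n=64):
    """Resample a stroke to n roughly equidistant points."""
    pts = list(points)
    interval = sum(math.dist(pts[i-1], pts[i]) for i in range(1, len(pts))) / (n - 1)
    out, d, i = [pts[0]], 0.0, 1
    while i < len(pts):
        seg = math.dist(pts[i-1], pts[i])
        if seg > 0 and d + seg >= interval:
            t = (interval - d) / seg
            q = (pts[i-1][0] + t * (pts[i][0] - pts[i-1][0]),
                 pts[i-1][1] + t * (pts[i][1] - pts[i-1][1]))
            out.append(q)
            pts.insert(i, q)   # q starts the next segment
            d = 0.0
        else:
            d += seg
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points, n=64):
    """Resample, rotate to zero indicative angle, scale, and center."""
    pts = resample(points, n)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    theta = math.atan2(pts[0][1] - cy, pts[0][0] - cx)   # indicative angle
    c, s = math.cos(-theta), math.sin(-theta)
    pts = [((p[0] - cx) * c - (p[1] - cy) * s,
            (p[0] - cx) * s + (p[1] - cy) * c) for p in pts]
    xs = [p[0] for p in pts]; ys = [p[1] for p in pts]
    w = (max(xs) - min(xs)) or 1e-9
    h = (max(ys) - min(ys)) or 1e-9
    pts = [(p[0] / w, p[1] / h) for p in pts]            # scale to a unit box
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    return [(p[0] - cx, p[1] - cy) for p in pts]         # centroid to origin

def recognize(stroke, templates):
    """Return the template name with the smallest average point distance."""
    cand = normalize(stroke)
    def score(name):
        return sum(math.dist(p, q) for p, q in zip(cand, templates[name])) / len(cand)
    return min(templates, key=score)
```

With templates built by `normalize`, a candidate stroke at a different position, scale, and rotation still matches the template whose normalized shape it resembles most.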
Features, Regions, Gestures: Components of a Generic Gesture Recognition Engine
In recent years, research in novel types of human-computer interaction, for example multi-touch or tangible interfaces, has increased considerably. Although a large number of innovative applications [...]
GestureWiz: A Human-Powered Gesture Design Environment for User Interface Prototypes
The GestureWiz prototyping environment is presented that provides designers with an integrated solution for gesture definition, conflict checking, and real-time recognition by employing human recognizers in a Wizard of Oz manner and can perform with reasonable accuracy and latency.
Gestures as point clouds: a $P recognizer for user interface prototypes
$P, a new member of the $-family, remedies this limitation by considering gestures as clouds of points; it delivers >99% accuracy in user-dependent testing with 5+ training samples per gesture type and stays above 99% in user-independent tests when using data from 10 participants.
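The point-cloud idea can be illustrated with the greedy matching at the heart of the $P approach: treat both gesture and template as unordered point sets of equal size, sum weighted nearest-neighbor distances, and try several evenly spaced start points in both directions. This is a rough sketch under those assumptions (names are mine; it presumes both clouds have already been resampled and normalized to the same size, which $P does as preprocessing):

```python
import math

def cloud_distance(a, b, start):
    """Greedily match each point of cloud a to the nearest unmatched point
    of cloud b, beginning at index `start`; earlier matches weigh more."""
    n = len(a)
    matched = [False] * n
    total = 0.0
    for k in range(n):
        i = (start + k) % n
        best_j = min((j for j in range(n) if not matched[j]),
                     key=lambda j: math.dist(a[i], b[j]))
        matched[best_j] = True
        total += (1 - k / n) * math.dist(a[i], b[best_j])
    return total

def greedy_cloud_match(a, b):
    """Symmetric cloud distance: try a few evenly spaced start points
    and both matching directions, keeping the minimum."""
    n = len(a)
    step = max(1, int(n ** 0.5))
    return min(min(cloud_distance(a, b, s), cloud_distance(b, a, s))
               for s in range(0, n, step))
```

Because the points are matched as an unordered cloud, two strokes drawn with different point orderings (or stroke orderings, for multistroke gestures) can still yield a near-zero distance.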
A lightweight multistroke recognizer for user interface prototypes
This work presents $N, a lightweight, concise multistroke recognizer that uses only simple geometry and trigonometry. It is a significant extension of the $1 unistroke recognizer, which has seen quick uptake in prototypes but has key limitations.
Towards a unified gesture description language
TLDR
A generic and extensible formal language to describe gestures is presented, which enables the development of a generic gesture recognition engine which can serve as a backend to a wide variety of user interfaces. Expand
Assignment 3 – Extending the $N Recognizer (CAP 6105, due 10/14/11 11:59 pm)
With the expansion of pen- and touch-based computing, new user interface prototypes may incorporate stroke gestures. Many gestures comprise multiple strokes, but building state-of-the-art multistroke [...]
Concepture: a regular language based framework for recognizing gestures with varying and repetitive patterns
Concepture, a framework based on regular language grammars for the authoring and recognition of sketched gestures with infinitely varying and repetitive patterns, is presented and shown to be effective in efficiently authoring a number of common, yet difficult to recognize, gestures.
G-Gene
G-Gene, a method for transforming compositional stroke gesture definitions into profile Hidden Markov Models (HMMs), is established; it provides both good accuracy and information on gesture sub-parts, and requires less time and effort for creating guidance systems compared to common gesture classification approaches.
MAGIC summoning: towards automatic suggesting and testing of gestures with low probability of false positives during use
The tool MAGIC Summoning exploits the SAX representation of the EGL to suggest gestures with a low likelihood of false triggering, and demonstrates MAGIC's effectiveness in gesture selection and helpfulness in creating accurate gesture recognizers.
Gesture coder: a tool for programming multi-touch gestures by demonstration
Gesture Coder is presented, which, by learning from a few examples given by the developer, automatically generates code that recognizes multi-touch gestures, tracks their state changes, and invokes corresponding application actions; the generated recognizers can be improved with higher-quality and more training data.

References

Showing 1–10 of 38 references
Specifying gestures by example
GRANDMA, a toolkit for rapidly adding gestures to direct manipulation interfaces, and the trainable single-stroke gesture recognizer used by GRANDMA are described.
Integrating gesture and snapping into a user interface toolkit
Artkit provides an extensible input model which is designed to support a wider range of interaction techniques than conventional user interface toolkits, and supports the implementation of interaction objects using dragging, snapping, and gesture inputs.
AppLens and LaunchTile: two designs for one-handed thumb use on small devices
We present two interfaces to support one-handed thumb use for PDAs and cell phones. Both use Scalable User Interface (ScUI) techniques to support multiple devices with different resolutions and [...]
Stitching: pen gestures that span multiple displays
The general requirements of stitching are identified, and a prototype photo-sharing application is described that uses stitching to allow users to copy images from one tablet to another that is nearby, expand an image across multiple screens, establish a persistent shared workspace, or use one tablet to present images that a user selects from another.
Implications for a gesture design tool
The experiment confirmed that gesture design is very difficult and suggested several ways in which current tools can be improved, the most important being to make the tools more active and provide more guidance for designers.
SHARK2: a large vocabulary shorthand writing system for pen-based computers
The architecture, algorithms, and interfaces of a high-capacity multi-channel pen-gesture recognition system that supports a gradual and seamless transition from visually guided tracing to recall-based gesturing are designed and implemented.
A new gesture recognition algorithm and segmentation method of Korean scripts for gesture-allowed ink editor
This paper designs eleven gestures, presents a new feature-based recognition algorithm to identify them, proposes a method for segmenting ink data consisting mainly of Korean scripts into logical units, and implements GesEdit, a gesture-allowed ink editor, on PDAs using the proposed methods.
Evaluation of an on-line adaptive gesture interface with command prediction
An evaluation of a hybrid gesture interface framework that combines on-line adaptive gesture recognition with a command predictor shows that the hybrid adaptive system significantly improved overall gesture recognition performance, and reduced users' need to practice making the gestures before achieving good results.
Hidden Markov Model Symbol Recognition for Sketch-Based Interfaces
This paper describes a recognition strategy based on HMMs, includes recognition results on twelve sketched symbols, and successfully applies this methodology to a PDA sketch-based interface for controlling a team of robots.
SATIN: a toolkit for informal ink-based applications
SATIN is a Java-based toolkit designed to support the creation of applications that leverage the informal nature of pens, including a scenegraph for manipulating and rendering objects and compatibility with Java's Swing toolkit.