Voice augmented manipulation: using paralinguistic information to manipulate mobile devices

@inproceedings{Sakamoto2013VoiceAM,
  title={Voice augmented manipulation: using paralinguistic information to manipulate mobile devices},
  author={Daisuke Sakamoto and Takanori Komatsu and Takeo Igarashi},
  booktitle={MobileHCI '13},
  year={2013}
}
We propose a technique called voice augmented manipulation (VAM) for augmenting user operations in a mobile environment. This technique augments user interactions on mobile devices, such as finger gestures and button pressing, with voice. For example, when a user makes a finger gesture on a mobile phone and voices a sound into it, the operation will continue until the user stops making the sound or makes another finger gesture. The VAM interface also provides a button-based interface, and the function…
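The continue-while-voicing behaviour described in the abstract can be sketched in a few lines. This is a hypothetical illustration only, not the authors' implementation; the `vam_scroll` function, the amplitude threshold, and the per-frame step size are all assumptions:

```python
# Sketch of the VAM idea: a finger gesture starts an operation, and the
# operation repeats for as long as the user's voice amplitude stays above
# a threshold. (Hypothetical API; thresholds and step size are assumptions.)

def vam_scroll(amplitude_stream, threshold=0.2, step=10):
    """Return total scroll distance: scrolling continues while voicing persists."""
    distance = 0
    for amplitude in amplitude_stream:
        if amplitude < threshold:   # user stopped voicing -> stop the operation
            break
        distance += step            # augment the gesture: keep scrolling
    return distance

# Simulated microphone envelope: voicing for five frames, then silence.
frames = [0.8, 0.7, 0.6, 0.5, 0.3, 0.05, 0.0]
print(vam_scroll(frames))  # 5 voiced frames * 10 px = 50
```

In a real system the amplitude stream would come from the device microphone, and the gesture that initiated the operation would determine which action (scroll, zoom, repeat) is being sustained.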
Improving one-handed interaction with touchscreen smartphones
TLDR
The thesis finds that users prefer convenience over efficiency and confirms that they predominantly use one hand; it presents an approach to classify a user's finger with a high degree of accuracy using a single touch, and a thumb-optimised GUI that increases the usability and efficiency of one-handed website operation.
Advances on Breathing Based Text Input for Mobile Devices
TLDR
The advances achieved in this work are narrated from the outcomes of implementing and experimenting with a mobile phone application that handles background noise through signal processing and introduces a new keyboard layout.
Multi-touch gestures in multimodal systems interaction among preschool children
Multi-touch gesture interactions have grown in popularity since the emergence of various types of tablets and smart phones. This simple and natural type of interaction has attracted users from all…
SoundCraft: Enabling Spatial Interactions on Smartwatches using Hand Generated Acoustics
TLDR
The algorithm described adapts the MUltiple SIgnal Classification (MUSIC) technique, enabling robust localization and classification of the acoustics when the microphones must be placed in close proximity.
Towards a wearable device for controlling a smartphone with eye winks
TLDR
EyeWink is an innovative hands- and voice-free wearable device that allows users to operate a smartphone with eye winks and can be widely used, with customers ranging from runners to people with severe disabilities.
Operating a Robot by Nonverbal Voice Expressed with Acoustic Features
TLDR
These methods enable operators to control multiple degrees of freedom simultaneously and operate a robot intuitively by nonverbal voice, by associating the nonverbal voice and tongue position with the coordinates of the robot's hand.
Whoosh: non-voice acoustics for low-cost, hands-free, and rapid input on smartwatches
TLDR
A recognition system capable of detecting non-voice events directed at and around the watch, including blows, sip-and-puff, and directional air swipes, without hardware modifications to the device is built.
Voodle: Vocal Doodling to Sketch Affective Robot Motion
TLDR
It is found that users develop a personal language with Voodle; that a vocalization's meaning changed with narrative context; and that voodling imparts a sense of life to the robot, inviting designers to suspend disbelief and engage in a playful, conversational style of design.
Talking to Teo: Video game supported speech therapy
TLDR
Talking to Teo, a video game developed and based on verbal therapy and educational objectives, is introduced; it is aimed at the rehabilitation of children with early-diagnosed hearing disability who use aids such as cochlear implants.
Exploring Boundless Scroll by Extending Motor Space
TLDR
An empirically controlled study demonstrates the Boundless Scroll's many benefits, such as fewer clutching actions, occlusion-less content observations, and efficient and effortless off-screen acquisitions, and also provides further design factors, implementations of the Boundless Scroll, and around-device interfaces.

References

Showing 1-10 of 37 references
VoicePen: augmenting pen input with simultaneous non-linguistic vocalization
TLDR
This paper presents a set of interaction techniques that leverage the combination of voice and pen input when performing both creative drawing and object manipulation tasks and suggests that with little training people can use non-linguistic vocalization to productively augment digital pen interaction.
SideSight: multi-"touch" interaction around small devices
TLDR
A prototype device with infra-red proximity sensors embedded along each side and capable of detecting the presence and position of fingers in the adjacent regions is described, which gives a larger input space than would otherwise be possible, and which may be used in conjunction with or instead of on-display touch input.
Experimental analysis of touch-screen gesture designs in mobile environments
TLDR
It is found that in the presence of environmental distractions, gestures can offer significant performance gains and reduced attentional load, while performing as well as soft buttons when the user's attention is focused on the phone.
Voicedraw: a hands-free voice-driven drawing application for people with motor impairments
TLDR
VoiceDraw is presented, a voice-driven drawing application for people with motor impairments that provides a way to generate free-form drawings without needing manual interaction and offers insights for mapping human voice to continuous control.
"Move the couch where?": developing an augmented reality multimodal interface
TLDR
An augmented reality (AR) multimodal interface is described that uses speech and paddle gestures for interaction, allowing users to intuitively arrange virtual furniture in a virtual room using a combination of speech and gestures with a real paddle.
Speech augmented multitouch interaction patterns
TLDR
This paper introduces design patterns which support developers in exploiting the possibilities of combined voice and touch interaction for newly developed systems, so that interaction with these systems becomes more natural for the respective end users.
Gesture search: a tool for fast mobile data access
TLDR
Gesture Search contributes a unique way of combining gesture-based interaction and search for fast mobile data access, and demonstrates a novel approach for coupling gestures with standard GUI interaction.
Multimodal User Input to Supervisory Control Systems: Voice-Augmented Keyboard
TLDR
According to experimental results, current moderately priced voice recognition systems are an inappropriate human-computer interaction technology for supervisory control systems.
Voice as sound: using non-verbal voice input for interactive control
TLDR
It is suggested that voice-as-sound techniques can enhance the traditional voice recognition approach and achieve more direct, immediate interaction by using lower-level features of voice such as pitch and volume.
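The "voice as sound" idea rests on extracting low-level acoustic features rather than recognizing words. A minimal sketch of two such features, volume (RMS energy) and pitch, is shown below; the naive autocorrelation pitch estimator is an assumption for illustration, as real systems use more robust trackers:

```python
# Hypothetical sketch of low-level "voice as sound" features: volume as RMS
# energy, and pitch estimated by naive autocorrelation over a plausible
# speaking range. Not the paper's implementation.
import math

def rms_volume(samples):
    """Root-mean-square energy of a frame of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def pitch_hz(samples, sample_rate, min_hz=80, max_hz=400):
    """Estimate pitch by finding the lag with maximal autocorrelation."""
    best_lag, best_corr = 0, 0.0
    for lag in range(sample_rate // max_hz, sample_rate // min_hz + 1):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag if best_lag else 0.0

# A pure 200 Hz tone sampled at 8 kHz: the estimate should recover ~200 Hz.
sr = 8000
tone = [math.sin(2 * math.pi * 200 * n / sr) for n in range(sr // 10)]
print(round(pitch_hz(tone, sr)))       # 200
print(round(rms_volume(tone), 2))      # 0.71 (sine RMS = 1/sqrt(2))
```

In an interactive control setting, volume might drive the rate of an action and pitch its direction or magnitude, mirroring the continuous mappings these papers explore.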
GraspZoom: zooming and scrolling control model for single-handed mobile interaction
TLDR
A single-handed UI scheme, "GraspZoom", is presented: a multi-state input model using pressure sensing that enables intuitive and continuous zooming and scrolling; using tiny thumb gesture input along with this pressure sensing method achieves bi-directional operations.