Patterns for How Users Overcome Obstacles in Voice User Interfaces

@inproceedings{Myers2018PatternsFH,
  title={Patterns for How Users Overcome Obstacles in Voice User Interfaces},
  author={Chelsea M. Myers and Anushay Furqan and Jessica Nebolsky and Karina Caro and Jichen Zhu},
  booktitle={Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems},
  year={2018}
}
Voice User Interfaces (VUIs) are growing in popularity. [...] We also found patterns that suggest participants were more likely to employ a "guessing" approach rather than rely on visual aids or knowledge recall.
Modeling Behavior Patterns with an Unfamiliar Voice User Interface
TLDR
It is found that user behavior can be grouped into three clusters: people who become proficient with the system and typically stay proficient while completing different tasks, people who exhibit an exploratory approach to completing tasks, and people who struggled to complete tasks.
The Impact of User Characteristics and Preferences on Performance with an Unfamiliar Voice User Interface
TLDR
The impact of user characteristics and preferences on how users interact with a VUI-based calendar, DiscoverCal, is examined by analyzing both VUI usage data and self-reported data to observe correlations between the two data types.
Understanding Differences between Heavy Users and Light Users in Difficulties with Voice User Interfaces
TLDR
It is found that heavy users could identify more diverse difficulty types than light users, that the types of difficulties affecting each group of users are different, and in particular that repetition of agent utterances was considered the most inconvenient by heavy users.
“Try, Try, Try Again:” Sequence Analysis of User Interaction Data with a Voice User Interface
TLDR
Using sequence analysis techniques to reveal the patterns of tactics participants used when interacting with an unfamiliar multi-modal VUI indicates that participants initially struggled with understanding the acceptable utterance structure and entities more so than utterance keywords.
Different Types of Voice User Interface Failures May Cause Different Degrees of Frustration
TLDR
An investigation into how different types of failures in a voice user interface (VUI) affect user frustration, which identified three major failure types as perceived by the users: Reason Unknown, Speech Misrecognition, and Utterance Pattern Match Failure.
What Can I Say?: Effects of Discoverability in VUIs on Task Performance and User Experience
TLDR
While no significant differences were found between the strategies, a majority of the participants highlighted their preference for the 'What Can I Say?' strategy if they were to use the VUI more frequently, suggesting designers should consider the use of a discoverability strategy in the design of VUIs.
Adaptive suggestions to increase learnability for voice user interfaces
TLDR
This research focuses on adapting a VUI's spoken feedback to suggest verbal commands to users encountering errors, guiding users to learn which verbal commands execute the VUI actions needed to accomplish their tasks with the system.
Adaptable Utterances in Voice User Interfaces to Increase Learnability
TLDR
This work proposes adaptable verbal commands, termed adaptable utterances, and Open User Models (OUMs) as a method to allow customization of a VUI's commands to match the individual user's preference.
“You, Move There!”: Investigating the Impact of Feedback on Voice Control in Virtual Environments
TLDR
It is found that the type of feedback given by agents is critical to user experience, and specifically auditory mechanisms are preferred, allowing users to engage with other modalities seamlessly during interaction.
Reading Between the Guidelines: How Commercial Voice Assistant Guidelines Hinder Accessibility for Blind Users
TLDR
A qualitative document review of VAPA design guidelines published by the top commercial vendors Amazon, Google, Microsoft, Apple, and Alibaba found that the guidelines have many commonalities that surface an underlying assumption that VAPA interfaces should be modeled after human-human conversation.

References

Showing 1–10 of 23 references
What can I say?: addressing user experience challenges of a mobile voice user interface for accessibility
TLDR
This paper addresses long-standing usability challenges introduced by voice interactions that negatively affect user experience due to difficulty learning and discovering voice commands, and offers a set of implications for the design of M-VUIs.
The role of spoken feedback in experiencing multimodal interfaces as human-like
TLDR
It is shown that users' views and preferences lean significantly towards anthropomorphism after actually experiencing the multimodal timetable system, and that in order to appreciate a human-like interface, users have to experience it.
"Like Having a Really Bad PA": The Gulf between User Expectation and Experience of Conversational Agents
TLDR
This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use, and finds user expectations dramatically out of step with the operation of the systems.
Learnability through Adaptive Discovery Tools in Voice User Interfaces
TLDR
This paper presents DiscoverCal, a calendar application built with adaptive discovery tools to improve learnability in VUIs, and describes the design of a VUI that adapts based on contextual relevance and user performance in order to extend learnability beyond initial use.
Managing Uncertainty in Time Expressions for Virtual Assistants
TLDR
This paper explores existing practices, expectations, and preferences surrounding the use of ITEs, and finds that people frequently use a diverse set of ITEs in both communication and planning, and have a variety of expectations about time input and management when interacting with virtual assistants.
The limits of speech recognition
TLDR
By understanding the cognitive processes surrounding human "acoustic memory" and processing, interface designers may be able to integrate speech more effectively and guide users more successfully.
"Alexa is my new BFF": Social Roles, User Satisfaction, and Personification of the Amazon Echo
TLDR
Results indicate marked variance in how people refer to the device, with over half using the personified name Alexa but most referencing the device with object pronouns; personification predicts user satisfaction with the Echo.
Speech versus Mouse Commands for Word Processing: An Empirical Evaluation
TLDR
Evidence is provided for the utility of speech input for command activation in application programs when the keyboard is used for text entry and the mouse for direct manipulation.
JustSpeak: enabling universal voice control on Android
TLDR
JustSpeak enables system-wide voice control on Android that can accommodate any application, and provides more efficient and natural interaction with support for multiple voice commands in the same utterance.
Patterns of entry and correction in large vocabulary continuous speech recognition systems
TLDR
Presents details of the kinds of usability and system design problems likely in current systems, along with several common patterns of error correction that were observed.