Patterns for How Users Overcome Obstacles in Voice User Interfaces

Chelsea M. Myers, Anushay Furqan, Jessica Nebolsky, Karina Caro, Jichen Zhu
Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems

Voice User Interfaces (VUIs) are growing in popularity. […] We also found patterns suggesting that participants were more likely to employ a "guessing" approach than to rely on visual aids or knowledge recall.

Modeling Behavior Patterns with an Unfamiliar Voice User Interface

It is found that user behavior can be grouped into three clusters: people who become proficient with the system and typically stay proficient while completing different tasks, people who take an exploratory approach to completing tasks, and people who struggle to complete tasks.

The Impact of User Characteristics and Preferences on Performance with an Unfamiliar Voice User Interface

The impact of user characteristics and preferences on how users interact with a VUI-based calendar, DiscoverCal, is examined by analyzing both VUI usage data and self-reported data to observe correlations between the two data types.

Understanding Differences between Heavy Users and Light Users in Difficulties with Voice User Interfaces

It is found that heavy users could identify more diverse difficulty types than light users, that the types of difficulties affecting each group of users differ, and that, in particular, heavy users considered the repetition of agent utterances the most inconvenient.

“Try, Try, Try Again:” Sequence Analysis of User Interaction Data with a Voice User Interface

Sequence analysis of the patterns of tactics the authors' participants used when interacting with an unfamiliar multimodal VUI indicates that participants initially struggled more with understanding the acceptable utterance structure and entities than with utterance keywords.

Different Types of Voice User Interface Failures May Cause Different Degrees of Frustration

An investigation into how different types of failures in a voice user interface (VUI) affect user frustration identifies three major failure types as perceived by users: Reason Unknown, Speech Misrecognition, and Utterance Pattern Match Failure.

What Can I Say?: Effects of Discoverability in VUIs on Task Performance and User Experience

While no significant differences were found between the strategies, a majority of participants said they would prefer the 'What Can I Say?' strategy if they were to use the VUI more frequently, suggesting that designers should consider a discoverability strategy in the design of VUIs.

KaraokeVUI: Utilizing Karaoke Subtitles for Voice User Interfaces to Navigate Users What They Would Say

KaraokeVUI, a VUI help tool that supports voice operation by displaying spoken words as feedback, filling in blanks or overlaying them on phrases as on a karaoke screen, is proposed, and its usefulness and usability are evaluated.

Adaptive suggestions to increase learnability for voice user interfaces

This research focuses on adapting a VUI's spoken feedback to suggest verbal commands to users encountering errors, guiding users to learn which verbal commands execute the VUI actions needed to accomplish their desired tasks.

Adaptable Utterances in Voice User Interfaces to Increase Learnability

This work proposes adaptable verbal commands, termed adaptable utterances, and Open User Models (OUMs) as a method to allow customization of a VUI's commands to match the individual user’s preference.

“You, Move There!”: Investigating the Impact of Feedback on Voice Control in Virtual Environments

It is found that the type of feedback given by agents is critical to user experience, and specifically auditory mechanisms are preferred, allowing users to engage with other modalities seamlessly during interaction.

What can I say?: addressing user experience challenges of a mobile voice user interface for accessibility

This paper addresses long-standing usability challenges of voice interactions that negatively affect user experience due to the difficulty of learning and discovering voice commands, and offers a set of implications for the design of M-VUIs.

The role of spoken feedback in experiencing multimodal interfaces as human-like

It is shown that users' views and preferences lean significantly towards anthropomorphism after actually experiencing the multimodal timetable system, and that in order to appreciate a human-like interface, the users have to experience it.

"Like Having a Really Bad PA": The Gulf between User Expectation and Experience of Conversational Agents

This paper reports the findings of interviews with 14 users of CAs in an effort to understand the current interactional factors affecting everyday use, and finds user expectations dramatically out of step with the operation of the systems.

Learnability through Adaptive Discovery Tools in Voice User Interfaces

This paper presents DiscoverCal, a calendar application built with adaptive discovery tools to improve learnability in VUIs, and describes the design of a VUI that adapts based on contextual relevance and user performance in order to extend learnability beyond initial use.

Managing Uncertainty in Time Expressions for Virtual Assistants

This paper explores existing practices, expectations, and preferences surrounding the use of ITEs, finding that people frequently use a diverse set of ITEs in both communication and planning and have a variety of expectations about time input and management when interacting with virtual assistants.

The limits of speech recognition

By understanding the cognitive processes surrounding human “acoustic memory” and processing, interface designers may be able to integrate speech more effectively and guide users more successfully.

"Alexa is my new BFF": Social Roles, User Satisfaction, and Personification of the Amazon Echo

Results indicate marked variance in how people refer to the device: over half use the personified name Alexa, yet most reference the device with object pronouns; personification predicts user satisfaction with the Echo.

Speech versus Mouse Commands for Word Processing: An Empirical Evaluation

Evidence is provided for the utility of speech input for command activation in application programs when the keyboard is used for text entry and the mouse for direct manipulation.

JustSpeak: enabling universal voice control on Android

JustSpeak enables system-wide voice control on Android that can accommodate any application, providing more efficient and natural interaction with support for multiple voice commands in a single utterance.

Patterns of entry and correction in large vocabulary continuous speech recognition systems

Details are presented of the kinds of usability and system-design problems likely in current systems, along with several common patterns of error correction that were found.