HoloSound: Combining Speech and Sound Identification for Deaf or Hard of Hearing Users on a Head-mounted Display

@inproceedings{guo2020holosound,
  title={HoloSound: Combining Speech and Sound Identification for Deaf or Hard of Hearing Users on a Head-mounted Display},
  author={Ru Guo and Yiru Yang and Johnson Kuang and Xue Bin and Dhruv Jain and Steven M. Goodman and Leah Findlater and Jon Froehlich},
  booktitle={The 22nd International ACM SIGACCESS Conference on Computers and Accessibility},
  year={2020}
}
  • Published 26 October 2020
Head-mounted displays can provide private and glanceable speech and sound feedback to deaf and hard of hearing people, yet prior systems have largely focused on speech transcription. We introduce HoloSound, a HoloLens-based augmented reality (AR) prototype that uses deep learning to classify and visualize sound identity and location in addition to providing speech transcription. This poster paper presents a working proof-of-concept prototype and discusses future opportunities for advancing AR…
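The abstract describes three capabilities: speech transcription, sound classification, and sound localization. As a rough illustration of the latter two only — not the authors' implementation, which runs a deep learning classifier on the HoloLens — the sketch below uses hand-rolled band-energy features with a nearest-centroid lookup standing in for the trained model, and an onset-difference heuristic standing in for true spatial inference. All function names, labels, and thresholds here are illustrative assumptions.

```python
import math

def band_energies(samples, n_bands=4):
    """Pool naive DFT magnitudes into n_bands frequency-band energies.
    (A real system would use log-mel spectrograms and a trained CNN.)"""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    width = len(mags) // n_bands
    return [sum(mags[i * width:(i + 1) * width]) for i in range(n_bands)]

def classify(features, centroids):
    """Nearest-centroid lookup standing in for the deep sound classifier."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(features, centroids[label]))

def locate(left, right, threshold=0.5):
    """Coarse left/right localization: whichever microphone's signal
    crosses the amplitude threshold first is nearer the sound source."""
    def onset(signal):
        for i, s in enumerate(signal):
            if abs(s) >= threshold:
                return i
        return len(signal)
    dl, dr = onset(left), onset(right)
    return "left" if dl < dr else "right" if dr < dl else "center"
```

For example, a low-frequency tone and a high-frequency tone produce energy in different bands, so centroids built from one example of each suffice to separate them; two-channel onset comparison then gives a coarse direction label of the kind a display could render.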


Towards Sound Accessibility in Virtual Reality
This paper provides a first comprehensive investigation of sound accessibility in VR, including a design space for developing visual and haptic substitutes of VR sounds to support DHH users and prototypes illustrating several points within the design space.
Immersive Inclusivity at CHI: Design and Creation of Inclusive User Interactions Through Immersive Media
The aim of this workshop is to create a discussion platform at the intersection of immersive media, accessibility, and human-computer interaction; to outline the key current and future problems of inclusive immersive design; and to define a set of methodologies for designing and evaluating immersive systems from an inclusivity perspective.


Deaf, Hard of Hearing, and Hearing Perspectives on Using Automatic Speech Recognition in Conversation
The report discusses the most common use cases, their challenges, and best practices plus pitfalls to avoid in using personal devices with ASR for commands or conversation.
Head-Mounted Display Visualizations to Support Sound Awareness for the Deaf and Hard of Hearing
This paper designs and evaluates visualizations for spatially locating sound on a head-mounted display (HMD). The authors developed eight high-level visual sound feedback dimensions, reaffirm past work on the challenges faced by people with hearing loss in group conversations, provide support for the general idea of sound awareness visualizations on HMDs, and reveal preferences for specific design options.
VisAural: a wearable sound-localisation device for people with impaired hearing
VisAural is a system that converts audible signals into visual cues, using an array of head-mounted microphones, and places LEDs at the periphery of the user's visual field to guide them to the source of the sound.
SpeechBubbles: Enhancing Captioning Experiences for Deaf and Hard-of-Hearing People in Group Conversations
SpeechBubbles is a real-time speech recognition interface prototype on an augmented reality head-mounted display. A study demonstrated that DHH participants preferred the authors' prototype over traditional captions for group conversations, and significantly preferred speech-bubble visualizations over traditional captions.
A Personalizable Mobile Sound Detector App Design for Deaf and Hard-of-Hearing Users
A mobile phone app that alerts deaf and hard-of-hearing people to sounds they care about is designed, and the viability of a basic machine learning algorithm for sound detection is explored.
HomeSound: An Iterative Field Deployment of an In-Home Sound Awareness System for Deaf or Hard of Hearing Users
HomeSound, an in-home sound awareness system for Deaf and hard of hearing (DHH) users, consists of a microphone and display, and uses multiple devices installed in each home, similar to the Echo Show or Nest Hub.
Deaf and Hard-of-hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies
Findings related to sound type, full captions vs. keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.
SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users
A performance evaluation of four low-resource deep learning sound classification models (MobileNet, Inception, ResNet-lite, and VGG-lite) across four device architectures (watch-only, watch+phone, watch+phone+cloud, and watch+cloud) finds that the watch+phone architecture provided the best balance between CPU, memory, network usage, and classification latency.
Evaluating Smartwatch-based Sound Feedback for Deaf and Hard-of-hearing Users Across Contexts
The findings characterize uses for vibration in multimodal sound awareness, both for push notification and for immediately actionable sound information displayed through vibrational patterns (tactons) in smartwatch feedback techniques.
Towards Accessible Conversations in a Mobile Context for People who are Deaf and Hard of Hearing
Results show that, while walking, HMD captions can support communication access and improve attentional balance between the speaker(s) and navigating the environment.