HomeSound: An Iterative Field Deployment of an In-Home Sound Awareness System for Deaf or Hard of Hearing Users

  • Dhruv Jain, Kelly M. Mack, Akli Amrous, Matt Wright, Steven M. Goodman, Leah Findlater, Jon Froehlich
  • Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20)
  • Published 21 April 2020
We introduce HomeSound, an in-home sound awareness system for Deaf and hard of hearing (DHH) users. Similar to an Echo Show or Nest Hub, each HomeSound device pairs a microphone with a display, and multiple devices are installed throughout the home. We iteratively developed two prototypes, both of which sense and visualize sound information in real time. Prototype 1 provided a floorplan view of sound occurrences with waveform histories depicting loudness and pitch. A three-week deployment in four DHH homes…


Wearable Subtitles: Augmenting Spoken Communication with Lightweight Eyewear for All-day Captioning
Wearable Subtitles is a lightweight, 3D-printed proof-of-concept head-worn display (HWD) that explores augmenting communication through sound transcription across a full workday using a low-power microcontroller architecture, and it addresses critical challenges for the adoption of HWDs.
ProtoSound: A Personalized and Scalable Sound Recognition System for Deaf and Hard-of-Hearing Users
ProtoSound, an interactive system for customizing sound recognition models by recording a few examples, is introduced, enabling personalized and fine-grained sound categories; open challenges in personalizable sound recognition are discussed, including the need for better recording interfaces and algorithmic improvements.
Toward User-Driven Sound Recognizer Personalization with People Who Are d/Deaf or Hard of Hearing
To better understand how DHH users can drive personalization of their own assistive sound recognition tools, a three-part study with 14 DHH participants highlights a positive subjective experience when recording and interpreting training data in situ, but uncovers several key pitfalls unique to DHH users.
Field study of a tactile sound awareness device for deaf users
A wearable tactile technology providing sound feedback to DHH people is explored; participants reported that the device increased their awareness of sounds by conveying actionable cues and 'experiential' sound information.
HoloSound: Combining Speech and Sound Identification for Deaf or Hard of Hearing Users on a Head-mounted Display
HoloSound, a HoloLens-based augmented reality (AR) prototype that uses deep learning to classify and visualize sound identity and location in addition to providing speech transcription is introduced.
A Taxonomy of Sounds in Virtual Reality
A novel taxonomy of VR sounds successfully categorized nearly all sounds in the surveyed apps and uncovered additional insights for designing accessible visual and haptic sound substitutes for DHH users.
SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users
A performance evaluation of four low-resource deep learning sound classification models (MobileNet, Inception, ResNet-lite, and VGG-lite) across four device architectures (watch-only, watch+phone, watch+phone+cloud, and watch+cloud) finds that the watch+phone architecture provided the best balance between CPU, memory, network usage, and classification latency.
Let's Read: designing a smart display application to support CODAs when learning spoken language
A proposal for a smart display application called Let's Read that aims to support hearing children of Deaf adults (CODAs) when learning spoken language is presented, along with a heuristic evaluation to improve the proposed prototype.
Conversational greeting detection using captioning on head worn displays versus smartphones
Preliminary findings from three hearing participants wearing sound-masking headphones while performing a mobile task suggest that an HWD may be faster than, and preferred to, a smartphone for displaying captions that alert users to their name being called.


Exploring Sound Awareness in the Home for People who are Deaf or Hard of Hearing
The findings suggest a general interest in smarthome-based sound awareness systems, particularly for displaying contextually aware, personalized, and glanceable visualizations, but key concerns arose related to privacy, activity tracking, cognitive overload, and trust.
Deaf and Hard-of-hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies
Findings related to sound type, full captions vs. keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.
A Personalizable Mobile Sound Detector App Design for Deaf and Hard-of-Hearing Users
A mobile phone app that alerts deaf and hard-of-hearing people to sounds they care about is designed, and the viability of a basic machine learning algorithm for sound detection is explored.
Design and evaluation of a smartphone application for non-speech sound awareness for people with hearing loss
  • M. Mielke, R. Brück
  • 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
  • 2015
A flexible mobile assistive device based on a smartphone that detects and recognizes acoustic events by analyzing the user's acoustic environment; using pattern recognition algorithms, the user can define the sounds the device should recognize.
Evaluating non-speech sound visualizations for the deaf
An iterative investigation of peripheral, visual displays of ambient sounds provides valuable information about the sound awareness needs of deaf users and can help inform further design of such applications.
"Accessibility Came by Accident": Use of Voice-Controlled Intelligent Personal Assistants by People with Disabilities
Examining the accessibility of off-the-shelf IPAs and how users with disabilities are making use of these devices shows that, although some accessibility challenges exist, users with a range of disabilities are using the Amazon Echo, including for unexpected cases such as speech therapy and support for caregivers.
Towards More Robust Speech Interactions for Deaf and Hard of Hearing Users
A better understanding of the challenges of deaf speech recognition is contributed and insights for future system development are provided, including the potential for groups to collectively exceed the performance of individuals.
Supporting Rhythm Activities of Deaf Children using Music-Sensory-Substitution Systems
An investigation of how a visual and vibrotactile music-sensory-substitution device, MuSS-Bits++, affects rhythm discrimination, reproduction, and expressivity for deaf people found that most participants felt more confident wearing the device in vibration mode, even when it did not objectively improve their accuracy.
Field trials of a tactile acoustic monitor for the profoundly deaf.
No significant overall improvement in subjects' control of voice level was observed, although some subjects found that having a voice level monitor gave them greater confidence to join conversations.
The smart house for older persons and persons with physical disabilities: structure, technology arrangements, and perspectives
This paper analyzes the building blocks of smart houses, with particular attention paid to the health monitoring subsystem as an important component, by addressing the basic requirements of various sensors implemented from both research and clinical perspectives.