SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users

@inproceedings{jain2020soundwatch,
  title={SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users},
  author={Dhruv Jain and Hung Ngo and Pratyush Patel and Steven M. Goodman and Leah Findlater and Jon Froehlich},
  booktitle={The 22nd International ACM SIGACCESS Conference on Computers and Accessibility},
  year={2020}
}
  • Published 26 October 2020
Smartwatches have the potential to provide glanceable, always-available sound feedback to people who are deaf or hard of hearing. In this paper, we present a performance evaluation of four low-resource deep learning sound classification models: MobileNet, Inception, ResNet-lite, and VGG-lite across four device architectures: watch-only, watch+phone, watch+phone+cloud, and watch+cloud. While direct comparison with prior work is challenging, our results show that the best model, VGG-lite… 
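The pipeline such a system evaluates can be sketched roughly as: buffer about one second of audio, convert it to log-spectral features, run a lightweight classifier, and surface the top label if it clears a confidence threshold. The sketch below is a minimal, hypothetical illustration using NumPy only; the dummy linear model, the feature shapes, the class names, and the notification threshold are all assumptions for illustration, not the paper's implementation (which uses models such as MobileNet and VGG-lite).

```python
import numpy as np

SAMPLE_RATE = 16000          # assumed capture rate (Hz)
FRAME = 400                  # 25 ms analysis frames (assumption)
HOP = 160                    # 10 ms hop between frames
N_BINS = 64                  # spectral bins kept per frame
LABELS = ["dog_bark", "door_knock", "alarm", "speech"]  # hypothetical classes

def log_spectrogram(audio: np.ndarray) -> np.ndarray:
    """Frame the waveform and take log-magnitude FFT bins (a stand-in
    for the log-mel features typically fed to mobile sound classifiers)."""
    n_frames = 1 + (len(audio) - FRAME) // HOP
    frames = np.stack([audio[i * HOP : i * HOP + FRAME] for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames * np.hanning(FRAME), axis=1))[:, :N_BINS]
    return np.log(mag + 1e-6)

def classify(features: np.ndarray, weights: np.ndarray) -> tuple[str, float]:
    """Dummy linear 'model': mean-pool over time, project to class scores."""
    pooled = features.mean(axis=0)               # (N_BINS,)
    scores = pooled @ weights                    # (len(LABELS),)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    top = int(np.argmax(probs))
    return LABELS[top], float(probs[top])

rng = np.random.default_rng(0)
weights = rng.normal(size=(N_BINS, len(LABELS)))  # stand-in for trained weights
audio = rng.normal(size=SAMPLE_RATE)              # 1 s of synthetic audio
label, conf = classify(log_spectrogram(audio), weights)
if conf > 0.5:                                    # notification threshold (assumption)
    print(f"notify: {label} ({conf:.2f})")
```

In the paper's four device architectures, the open design question is where each of these stages runs (on the watch, the paired phone, or the cloud), which is what drives the CPU, memory, network, and latency trade-offs reported.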


ProtoSound: A Personalized and Scalable Sound Recognition System for Deaf and Hard-of-Hearing Users
ProtoSound, an interactive system for customizing sound recognition models by recording a few examples, is introduced, enabling personalized and fine-grained sound categories; open challenges in personalizable sound recognition are also discussed, including the need for better recording interfaces and algorithmic improvements.
Toward User-Driven Sound Recognizer Personalization with People Who Are d/Deaf or Hard of Hearing
To better understand how DHH users can drive personalization of their own assistive sound recognition tools, a three-part study with 14 DHH participants highlights a positive subjective experience when recording and interpreting training data in situ, but uncovers several key pitfalls unique to DHH users.
HoloSound: Combining Speech and Sound Identification for Deaf or Hard of Hearing Users on a Head-mounted Display
HoloSound, a HoloLens-based augmented reality (AR) prototype that uses deep learning to classify and visualize sound identity and location in addition to providing speech transcription is introduced.
Accessing Passersby Proxemic Signals through a Head-Worn Camera: Opportunities and Limitations for the Blind
Analysis of data collected in a study with blind and sighted participants provides insights into dyadic behaviors for assistive pedestrian detection and leads to implications for the design of future head-worn cameras and interactions.
A Taxonomy of Sounds in Virtual Reality
A novel taxonomy for VR sounds was able to successfully categorize nearly all sounds in these apps and uncovered additional insights for designing accessible visual and haptic-based sound substitutes for DHH users.
Towards Sound Accessibility in Virtual Reality
This paper provides a first comprehensive investigation of sound accessibility in VR, including a design space for developing visual and haptic substitutes of VR sounds to support DHH users and prototypes illustrating several points within the design space.
Designing mobile spatial navigation systems from the user’s perspective: an interdisciplinary review
It is suggested that making mobile navigation systems more accessible and multimodal will make them more inclusive and usable for all types of users.
Overview of ASSETS 2020
This year's ASSETS conference set a new attendance record with 395 attendees from 29 countries across all continents and continued its tradition of presenting innovative research on mainstream and specialized assistive technologies, accessible computing, and assistive applications.
SoundWatch: Deep Learning for Sound Accessibility on Smartwatches
It is found that the best model, VGG-lite, performed similarly to the state of the art for non-portable devices while requiring substantially less memory, and that the watch+phone architecture provided the best balance among CPU, memory, network usage, and latency.


UbiEar: Bringing Location-independent Sound Awareness to the Hard-of-hearing People with Smartphones
UbiEar, a smartphone-based acoustic event sensing and notification system, is designed, and it is shown that UbiEar can assist young DHH students in becoming aware of important acoustic events in their daily lives.
A Personalizable Mobile Sound Detector App Design for Deaf and Hard-of-Hearing Users
A mobile phone app that alerts deaf and hard-of-hearing people to sounds they care about is designed, and the viability of a basic machine learning algorithm for sound detection is explored.
Design and evaluation of a smartphone application for non-speech sound awareness for people with hearing loss
  • M. Mielke, R. Brück
  • 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
  • 2015
A flexible, mobile assistive device based on a smartphone that detects and recognizes acoustic events by analysing the user's acoustic environment; using pattern recognition algorithms, the user can define the sounds the device should recognize.
Evaluating Smartwatch-based Sound Feedback for Deaf and Hard-of-hearing Users Across Contexts
The findings characterize uses for vibration in multimodal sound awareness, both for push notification and for immediately actionable sound information displayed through vibrational patterns (tactons) in smartwatch feedback techniques.
HomeSound: An Iterative Field Deployment of an In-Home Sound Awareness System for Deaf or Hard of Hearing Users
HomeSound, an in-home sound awareness system for Deaf and hard of hearing (DHH) users, consists of a microphone and display, and uses multiple devices installed in each home, similar to the Echo Show or Nest Hub.
VisAural: a wearable sound-localisation device for people with impaired hearing
VisAural is a system that converts audible signals into visual cues, using an array of head-mounted microphones, and places LEDs at the periphery of the user's visual field to guide them to the source of the sound.
Adoption Of ASL Classifiers As Delivered By Head-Mounted Displays In A Planetarium Show
Accommodating the planetarium experience to members of the deaf or hard-of-hearing community has often created situations that are either disruptive to the rest of the audience or provide an
Deaf and Hard-of-hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies
Findings related to sound type, full captions vs. keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.
Light-Emitting Device for Supporting Auditory Awareness of Hearing-Impaired People during Group Conversations
This study proposes a novel wearable device that augments the auditory awareness of hearing-impaired people, helping them identify the speaker during group conversations by estimating the direction of the sound source and indicating it in real time with light-emitting diodes (LEDs).
Head-Mounted Display Visualizations to Support Sound Awareness for the Deaf and Hard of Hearing
This paper designs and evaluates visualizations for spatially locating sound on a head-mounted display (HMD); it develops eight high-level visual sound feedback dimensions, reaffirms past work on the challenges faced by people with hearing loss in group conversations, provides support for the general idea of sound awareness visualizations on HMDs, and reveals preferences for specific design options.