Toward User-Driven Sound Recognizer Personalization with People Who Are d/Deaf or Hard of Hearing

@article{goodman_sound_personalization,
  title={Toward User-Driven Sound Recognizer Personalization with People Who Are d/Deaf or Hard of Hearing},
  author={Steven M. Goodman and Ping Liu and Dhruv Jain and Emma J. McDonnell and Jon E. Froehlich},
  journal={Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies},
  pages={1--23}
}
Automated sound recognition tools can be a useful complement to d/Deaf and hard of hearing (DHH) people's typical communication and environmental awareness strategies. Pre-trained sound recognition models, however, may not meet the diverse needs of individual DHH users. While approaches from human-centered machine learning can enable non-expert users to build their own automated systems, end-user ML solutions that augment human sensory abilities present a unique challenge for users who have… 


ProtoSound: A Personalized and Scalable Sound Recognition System for Deaf and Hard-of-Hearing Users

This paper introduces ProtoSound, an interactive system for customizing sound recognition models by recording a few examples, enabling personalized and fine-grained sound categories; it also discusses open challenges in personalizable sound recognition, including the need for better recording interfaces and algorithmic improvements.
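As a rough illustration of customizing a recognizer from "a few examples" — not ProtoSound's actual implementation — a prototype-based few-shot classifier averages the embeddings of each user-recorded class and assigns a new clip to the nearest prototype. The function names and toy vectors below are hypothetical; a real system would embed audio with a pre-trained network first.

```python
import math

def mean_vector(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(values) / n for values in zip(*vectors)]

def classify_by_prototype(support, labels, query):
    """Nearest-prototype few-shot classification.

    support: list of embedded support clips (equal-length vectors)
    labels:  class name for each support clip
    query:   embedded query clip
    Returns the label of the nearest class prototype.
    """
    by_class = {}
    for vec, label in zip(support, labels):
        by_class.setdefault(label, []).append(vec)
    # One prototype per class: the mean of its few recorded examples.
    prototypes = {c: mean_vector(vs) for c, vs in by_class.items()}
    # Assign the query to the class with the closest prototype.
    return min(prototypes, key=lambda c: math.dist(query, prototypes[c]))
```

With two recordings per class as toy 2-D embeddings, a query vector lying near one cluster is labeled with that cluster's class; new classes can be added just by recording more support examples.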

LiSee: A Headphone that Provides All-day Assistance for Blind and Low-vision Users to Reach Surrounding Objects

The results show that LiSee works robustly, indicating that it can meet most participants' daily needs for reaching surrounding objects and is likely to provide BLV users with all-day assistance.

Blind Users Accessing Their Training Images in Teachable Object Recognizers

This work engineers data descriptors that indicate in real time whether the object in a photo is cropped or too small, whether a hand is included, whether the photo is blurred, and how much the photos vary from each other; the descriptors are built into an open-source testbed iOS app called MYCam.

MERLOT RESERVE: Neural Script Knowledge through Vision and Language and Sound

This work introduces MERLOT RESERVE, a model that represents videos jointly over time through a new training objective that learns from audio, subtitles, and video frames; it obtains competitive results on four video tasks, even outperforming supervised approaches on the recently proposed Situated Reasoning (STAR) benchmark.

SIG: Towards More Personal Health Sensing

This Special Interest Group aims to bring in researchers from different fields, identify the significance and challenges of the personal health sensing domain, discuss potential solutions and future research directions, and promote collaborative research opportunities.


Use of Machine Learning by Non-Expert DHH People: Technological Understanding and Sound Perception

This work investigates how non-expert deaf and hard-of-hearing people understand ML technologies and design ML-based sound recognition systems, and shows that non-expert DHH people can begin to overcome the knowledge gap.

A Personalizable Mobile Sound Detector App Design for Deaf and Hard-of-Hearing Users

A mobile phone app that alerts deaf and hard-of-hearing people to sounds they care about is designed, and the viability of a basic machine learning algorithm for sound detection is explored.
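The summary does not name the "basic machine learning algorithm" explored; as a minimal, hypothetical sketch of the simplest preceding step such an app needs — deciding that some sound event occurred at all — an RMS-energy threshold over fixed-size frames can flag candidate segments for later classification. The function name and parameter values below are illustrative only.

```python
import math

def detect_events(samples, frame_size=1024, threshold=0.1):
    """Return start indices of frames whose RMS energy exceeds a threshold.

    samples: audio samples normalized to [-1.0, 1.0]
    """
    events = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        # Root-mean-square energy of the frame.
        rms = math.sqrt(sum(s * s for s in frame) / frame_size)
        if rms > threshold:  # loud enough to count as a candidate event
            events.append(start)
    return events
```

A silent stretch followed by a loud one yields only the loud frame's start index; in a real app the flagged frames would then be passed to a classifier and the threshold tuned per environment.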

Design and evaluation of a smartphone application for non-speech sound awareness for people with hearing loss

  • M. Mielke, R. Brück
  • Physics
    2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
  • 2015
A flexible, mobile assistive device based on a smartphone detects and recognizes acoustic events by analysing the user's acoustic environment with pattern recognition algorithms; the user can define the sounds that the device should recognise.

Scribe4Me: Evaluating a Mobile Sound Transcription Tool for the Deaf

A 2-week field study of an exploratory prototype of a mobile sound transcription tool for the deaf and hard-of-hearing shows that the approach is feasible, highlights particular contexts in which it is useful, and provides information about what should be contained in transcriptions.

UbiEar: Bringing Location-independent Sound Awareness to the Hard-of-hearing People with Smartphones

UbiEar, a smartphone-based acoustic event sensing and notification system, is designed and shown to assist young DHH students in becoming aware of important acoustic events in their daily lives.

Evaluating non-speech sound visualizations for the deaf

An iterative investigation of peripheral visual displays of ambient sounds provides valuable information about the sound awareness needs of deaf people and can help inform the further design of such applications.

AUDIS wear: A smartwatch based assistive device for ubiquitous awareness of environmental sounds

  • M. Mielke, R. Brück
  • Computer Science
    2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
  • 2016
Based on a smartwatch and pattern recognition algorithms, a prototype for environmental sound awareness is presented that observes the user's acoustic environment and detects environmental sounds.

Deaf and Hard-of-hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies

Findings related to sound type, full captions vs. keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.

Head-Mounted Display Visualizations to Support Sound Awareness for the Deaf and Hard of Hearing

This paper designs and evaluates visualizations for spatially locating sound on a head-mounted display (HMD). The authors developed eight high-level visual sound feedback dimensions, reaffirm past work on challenges faced by persons with hearing loss in group conversations, provide support for the general idea of sound awareness visualizations on HMDs, and reveal preferences for specific design options.

HomeSound: An Iterative Field Deployment of an In-Home Sound Awareness System for Deaf or Hard of Hearing Users

HomeSound, an in-home sound awareness system for Deaf and hard of hearing (DHH) users, consists of a microphone and display, and uses multiple devices installed in each home, similar to the Echo Show or Nest Hub.