UbiEar: Bringing Location-independent Sound Awareness to the Hard-of-hearing People with Smartphones

@article{Liu2017UbiEarBL,
  title={UbiEar: Bringing Location-independent Sound Awareness to the Hard-of-hearing People with Smartphones},
  author={Sicong Liu and Zimu Zhou and Junzhao Du and Longfei Shangguan and Jun Han and Xin Wang},
  journal={Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
  year={2017},
  volume={1},
  pages={17:1-17:21}
}
Non-speech sound-awareness is important to improve the quality of life for deaf and hard-of-hearing (DHH) people. DHH people, especially the young, are not always satisfied with their hearing aids. According to interviews with 60 young hard-of-hearing students, a ubiquitous sound-awareness tool for emergency and social events that works in diverse environments is desired. In this paper, we design UbiEar, a smartphone-based acoustic event sensing and notification system. Core techniques…
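As an illustration of the kind of pipeline the abstract describes, the sketch below shows a minimal smartphone-style acoustic event classifier in Python: a log-mel spectrogram front end feeding a small convolutional network. This is a hedged sketch only, not UbiEar's published architecture; the layer sizes, the 16 kHz sampling rate, the class count, and all names such as SmallSoundCNN are assumptions introduced here for illustration.

# Minimal sketch (not UbiEar's published architecture): a small CNN that
# classifies short audio clips into acoustic event categories via log-mel
# spectrogram features. Layer sizes and class count are illustrative.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 16000   # assumed microphone sampling rate
N_MELS = 40           # assumed number of mel bands
N_CLASSES = 9         # hypothetical number of acoustic event classes

class SmallSoundCNN(nn.Module):
    def __init__(self, n_classes: int = N_CLASSES):
        super().__init__()
        # Log-mel front end turns a 1-D waveform into a 2-D time-frequency image.
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=SAMPLE_RATE, n_fft=400, hop_length=160, n_mels=N_MELS
        )
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        # Two small conv blocks keep the parameter count low enough for a phone.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, num_samples) mono audio
        spec = self.to_db(self.melspec(waveform)).unsqueeze(1)  # (batch, 1, mels, frames)
        feats = self.features(spec).flatten(1)                  # (batch, 32)
        return self.classifier(feats)                           # (batch, n_classes) logits

if __name__ == "__main__":
    model = SmallSoundCNN().eval()
    one_second_clip = torch.randn(1, SAMPLE_RATE)   # placeholder audio, not real data
    with torch.no_grad():
        probs = torch.softmax(model(one_second_clip), dim=-1)
    print("predicted class:", int(probs.argmax()))

On an actual handset, a model of this shape would typically be exported to a mobile runtime and driven by a continuous microphone buffer before triggering notifications; those deployment steps are omitted from the sketch.
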
SoundWatch: Exploring Smartwatch-based Deep Learning Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users
TLDR
A performance evaluation of four low-resource deep learning sound classification models (MobileNet, Inception, ResNet-lite, and VGG-lite) across four device architectures (watch-only, watch+phone, watch+phone+cloud, and watch+cloud) finds that the watch+phone architecture provides the best balance among CPU, memory, network usage, and classification latency.
Mobile Sound Recognition for the Deaf and Hard of Hearing
TLDR
An exploratory study in the domain of assistive computing, eliciting requirements and presenting solutions to problems found in the development of an environmental sound recognition system, which aims to assist deaf and hard of hearing people in the perception of sounds.
Toward User-Driven Sound Recognizer Personalization with People Who Are d/Deaf or Hard of Hearing
TLDR
To better understand how DHH users can drive personalization of their own assistive sound recognition tools, a three-part study with 14 DHH participants highlights a positive subjective experience when recording and interpreting training data in situ, but uncovers several key pitfalls unique to DHH users.
Enssat: wearable technology application for the deaf and hard of hearing
TLDR
Enssat, a bilingual (Arabic/English) smartphone-based hearing aid application that uses Google Glass to assist DHH individuals, is presented, and the ease of use and utility of the application are demonstrated.
Deaf and Hard-of-hearing Individuals' Preferences for Wearable and Mobile Sound Awareness Technologies
TLDR
Findings related to sound type, full captions vs. keywords, sound filtering, notification styles, and social context provide direct guidance for the design of future mobile and wearable sound awareness systems.
Exploring Sound Awareness in the Home for People who are Deaf or Hard of Hearing
TLDR
A general interest in smarthome-based sound awareness systems is suggested, particularly for displaying contextually aware, personalized, and glanceable visualizations, but key concerns arise related to privacy, activity tracking, cognitive overload, and trust.
Use of Machine Learning by Non-Expert DHH People: Technological Understanding and Sound Perception
TLDR
This work investigates how non-expert deaf and hard-of-hearing people understand ML technologies and design ML-based sound recognition systems, and clarifies that non-expert DHH people start to overcome the knowledge gap.
GestEar: combining audio and motion sensing for gesture recognition on smartwatches
TLDR
A lightweight convolutional neural network architecture for gesture recognition, specifically designed to run locally on resource-constrained devices, achieves a user-independent recognition accuracy of 97.2% for nine distinct gestures.
EarVR: Using Ear Haptics in Virtual Reality for Deaf and Hard-of-Hearing People
TLDR
A new prototype called "EarVR", which can be mounted on any desktop or mobile VR head-mounted display (HMD), analyzes 3D sounds in a VR environment, locates the direction of the sound source closest to the user, and notifies the user of that direction using two vibro-motors placed on the user's ears.
...

References

SHOWING 1-10 OF 51 REFERENCES
Poster: MobiEar-Building an Environment-independent Acoustic Sensing Platform for the Deaf using Deep Learning
TLDR
By leveraging the microphone on commodity smartphones, universal sound-awareness applications are becoming possible, and deep learning models offer large leaps in accuracy and robustness, which help maintain safety awareness through acoustic alarms.
Design and evaluation of a smartphone application for non-speech sound awareness for people with hearing loss
  • M. Mielke, R. Brück
    2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
  • 2015
TLDR
A flexible and mobile assistive device based on a smartphone detects and recognizes acoustic events by analysing the acoustic environment of the user with pattern recognition algorithms; the user can define the sounds that should be recognised by the device.
A Personalizable Mobile Sound Detector App Design for Deaf and Hard-of-Hearing Users
TLDR
A mobile phone app that alerts deaf and hard-of-hearing people to sounds they care about is designed, and the viability of a basic machine learning algorithm for sound detection is explored.
DeepEar: robust smartphone audio sensing in unconstrained acoustic environments using deep learning
TLDR
This paper presents DeepEar, the first mobile audio sensing framework built from coupled Deep Neural Networks (DNNs) that simultaneously perform common audio sensing tasks, and shows that DeepEar is feasible for smartphones by building a cloud-free, DSP-based prototype that runs continuously using only 6% of the smartphone's battery daily.
VisAural: a wearable sound-localisation device for people with impaired hearing
TLDR
VisAural is a system that converts audible signals into visual cues, using an array of head-mounted microphones, and places LEDs at the periphery of the user's visual field to guide them to the source of the sound.
SoundSense: scalable sound sensing for people-centric applications on mobile phones
TLDR
This paper proposes SoundSense, a scalable framework for modeling sound events on mobile phones that represents the first general-purpose sound sensing system specifically designed to work on resource-limited phones, and demonstrates that SoundSense is capable of recognizing meaningful sound events that occur in users' everyday lives.
Visualizing non-speech sounds for the deaf
TLDR
An investigation of peripheral visual displays to help people who are deaf maintain an awareness of sounds in the environment presents a set of visual design preferences and functional requirements for peripheral visualizations of non-speech audio that will help improve future applications.
Can you see what I hear?: the design and evaluation of a peripheral sound display for the deaf
TLDR
Two visual displays for providing awareness of environmental audio to deaf individuals are developed that support both monitoring and notification of sounds, support discovery of new sounds, and do not require a priori knowledge of sounds to be detected.
Head-Mounted Display Visualizations to Support Sound Awareness for the Deaf and Hard of Hearing
TLDR
This paper designs and evaluates visualizations for spatially locating sound on a head-mounted display (HMD), develops eight high-level visual sound feedback dimensions, reaffirms past work on challenges faced by persons with hearing loss in group conversations, provides support for the general idea of sound awareness visualizations on HMDs, and reveals preferences for specific design options.
BodyBeat: a mobile system for sensing non-speech body sounds
TLDR
The results show that BodyBeat outperforms other existing solutions in capturing and recognizing different types of important non-speech body sounds.
...