Hearing Aid Research Data Set for Acoustic Environment Recognition

@inproceedings{Hwel2020HearingAR,
  title={Hearing Aid Research Data Set for Acoustic Environment Recognition},
  author={Andreas H{\"u}wel and Kamil Adiloglu and J{\"o}rg-Hendrik Bach},
  booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2020},
  pages={706--710}
}
State-of-the-art hearing aids (HA) are limited in their ability to recognize acoustic environments. Much research effort is spent on improving the listening experience for HA users in every acoustic situation. There is, however, no dedicated public database for training acoustic environment recognition algorithms with a specific focus on HA applications and their requirements. Existing acoustic scene classification databases are inappropriate for HA signal processing. In this work we propose a novel…


Citations

Deep Scattering Spectrum with Mobile Network for Low Complexity Acoustic Scene Classification
This paper proposes using the deep scattering spectrum (DSS) with a mobile network to keep computational complexity low, and describes the resulting classification model submitted to the DCASE 2021 Task 1a challenge.
Connected Hearing Devices and Audiologists: The User-Centered Development of Digital Service Innovations
After the user-centered development of the different service innovations, which are designed to converge on an integrated service platform, the functionality and applicability of the system are evaluated, together with the associated role models between the technical system, the hearing device users, and the audiologists.
FSD50K: An Open Dataset of Human-Labeled Sound Events
FSD50K is introduced, an open dataset containing over 51k audio clips totalling over 100 h of audio, manually labeled using 200 classes drawn from the AudioSet Ontology, intended to provide an alternative benchmark dataset and thus foster sound event recognition (SER) research.
Instance-level loss based multiple-instance learning for acoustic scene classification
This study develops an MIL framework better suited to ASC systems, adopting instance-level labels and an instance-level loss, which are effective in extracting and clustering instances, and is more practical than previous systems on the DCASE 2019 Challenge Task 1-A leaderboard.

References

Showing 1-10 of 20 references
TUT database for acoustic scene classification and sound event detection
The recording and annotation procedure, the database content, a recommended cross-validation setup and performance of supervised acoustic scene classification system and event detection baseline system using mel frequency cepstral coefficients and Gaussian mixture models are presented.
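As a rough illustration of the MFCC + GMM pipeline mentioned in the entry above, here is a minimal NumPy sketch: an MFCC front end, with a single diagonal-covariance Gaussian per scene class standing in for a full Gaussian mixture. All parameter values (sample rate, FFT size, band and coefficient counts) are illustrative assumptions, not the cited baseline's configuration.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Mel-frequency cepstral coefficients of a 1-D signal (NumPy only)."""
    # Frame the signal and apply a Hann window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2  # per-frame power spectrum

    # Triangular mel filterbank between 0 Hz and Nyquist.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    logmel = np.log(power @ fbank.T + 1e-10)
    # DCT-II over the mel bands yields the cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return logmel @ dct.T  # shape: (n_frames, n_ceps)

def fit_gaussian(feats):
    # One diagonal-covariance Gaussian per class; a real baseline would fit
    # a multi-component GMM instead.
    return feats.mean(axis=0), feats.var(axis=0) + 1e-6

def loglik(feats, mu, var):
    # Average per-frame log-likelihood under the class Gaussian; the class
    # with the highest score wins at test time.
    return float(np.mean(
        -0.5 * (np.log(2 * np.pi * var) + (feats - mu) ** 2 / var).sum(axis=1)))
```

A classifier in this style fits one model per acoustic scene on training MFCC frames and labels a test clip with the class whose model scores its frames highest.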
Database of Multichannel In-Ear and Behind-the-Ear Head-Related and Binaural Room Impulse Responses
An eight-channel database of head-related impulse responses and binaural room impulse responses is introduced, allowing realistic construction of simulated sound fields for hearing instrument research and, consequently, realistic evaluation of hearing instrument algorithms.
The fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, task and baselines
The 5th CHiME Challenge is introduced, which considers the task of distant multi-microphone conversational ASR in real home environments and describes the data collection procedure, the task, and the baseline systems for array synchronization, speech enhancement, and conventional and end-to-end ASR.
A multi-device dataset for urban acoustic scene classification
The acoustic scene classification task of DCASE 2018 Challenge and the TUT Urban Acoustic Scenes 2018 dataset provided for the task are introduced, and the performance of a baseline system in the task is evaluated.
Hardware / Software Architecture for Services in the Hearing Aid Industry
This paper provides a brief overview of the connectivity features available in modern hearing aids and of recent developments in acoustic scene classifiers, before describing a hardware/software architecture designed to exploit advances in both fields.
The second 'CHiME' speech separation and recognition challenge: Datasets, tasks and baselines
This paper is intended to be a reference on the 2nd `CHiME' Challenge, an initiative designed to analyze and evaluate the performance of ASR systems in a real-world domestic environment.
ESC: Dataset for Environmental Sound Classification
A new annotated collection of 2000 short clips comprising 50 classes of various common sound events, and an abundant unified compilation of 250000 unlabeled auditory excerpts extracted from recordings available through the Freesound project are presented.
Histogram of gradients of Time-Frequency Representations for Audio scene detection
This paper addresses the problem of audio scene classification and contributes a novel feature, the histogram of gradients (HOG) of a time-frequency representation of an audio scene, whose performance is evaluated against state-of-the-art competitors.
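The HOG-over-spectrogram idea summarized above can be sketched in plain NumPy: compute a log-magnitude spectrogram, take its gradients along time and frequency, and pool an orientation histogram weighted by gradient magnitude. The STFT settings and the eight-bin histogram are illustrative assumptions, not the cited paper's configuration.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Log power spectrogram of a 1-D signal (frames x frequency bins)."""
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(n_fft)
    return np.log(np.abs(np.fft.rfft(frames, axis=1)) ** 2 + 1e-10)

def hog_feature(tfr, n_bins=8):
    """Orientation histogram of the gradients of a time-frequency image."""
    gt, gf = np.gradient(tfr)          # gradients along time and frequency axes
    mag = np.hypot(gt, gf)             # gradient magnitude
    ang = np.arctan2(gf, gt) % np.pi   # unsigned orientation in [0, pi)
    edges = np.linspace(0.0, np.pi, n_bins + 1)
    hist, _ = np.histogram(ang, bins=edges, weights=mag)
    return hist / (hist.sum() + 1e-10)  # L1-normalised descriptor
```

The resulting fixed-length vector summarizes the dominant spectro-temporal edge orientations of a scene and can feed any standard classifier.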
Mel Frequency Cepstral Coefficients for Music Modeling
The results show that the use of the Mel scale for modeling music is at least not harmful for this problem, although further experimentation is needed to verify that this is the optimal scale in the general case and whether this transform is valid for music spectra.
Musical genre classification of audio signals
The automatic classification of audio signals into a hierarchy of musical genres is explored, and three feature sets representing timbral texture, rhythmic content, and pitch content are proposed.