Towards Realistic Immersive Audiovisual Simulations for Hearing Research: Capture, Virtual Scenes and Reproduction

@inproceedings{Llorach2018TowardsRI,
  title={Towards Realistic Immersive Audiovisual Simulations for Hearing Research: Capture, Virtual Scenes and Reproduction},
  author={Gerard Llorach and Giso Grimm and Maartje M. E. Hendrikse and Volker Hohmann},
  booktitle={Proceedings of the 2018 Workshop on Audio-Visual Scene Understanding for Immersive Multimedia},
  year={2018}
}
  • Gerard Llorach, G. Grimm, M. M. E. Hendrikse, V. Hohmann
  • Published 26 October 2018
  • Computer Science
  • Proceedings of the 2018 Workshop on Audio-Visual Scene Understanding for Immersive Multimedia
Most current hearing research laboratories and hearing aid evaluation setups are not sufficient to simulate real-life situations or to evaluate future generations of hearing aids that might incorporate gaze information and brain signals. New methodologies and technologies may therefore need to be implemented in hearing laboratories and clinics to generate realistic audiovisual testing environments. The aim of this work is to provide a comprehensive review of the currently available approaches…

Citations

The Virtual Reality Lab: Realization and Application of Virtual Sound Environments
TLDR: The results show similarities and differences in subject behavior and performance between the lab and the field, indicating that the virtual reality lab in its current state marks a step towards more ecological validity in lab-based hearing and hearing device research, but requires further development towards higher levels of ecological validity.
Evaluating the User in a Sound Localisation Task in a Virtual Reality Application
TLDR: An immersive VR spatial audio application enables users to specify or localise the source of a sound, and gives insight into users' abilities to localise sound sources in VR from a quality of experience (QoE) perspective.
Movement and Gaze Behavior in Virtual Audiovisual Listening Environments Resembling Everyday Life
TLDR: Analysis of the movement data showed that movement behavior depends on the VE and the age of the subject and is predictable in multitalker conversations and for moving distractors; evaluation of the questionnaires indicated that the VEs are sufficiently realistic.
Vehicle Noise: Loudness Ratings, Loudness Models and Future Experiments with Audiovisual Immersive Simulations
Loudness complaints are still very common among hearing-aid users. Therefore, loudness is an important issue that needs to be addressed in hearing aid research to improve and optimize hearing aid…
Audio-visual stimuli for the evaluation of speech-enhancing algorithms
The benefit from speech-enhancing algorithms in hearing devices may depend not only on the acoustic environment, but also on the audio-visual perception of speech, e.g., when lip reading, and on…
Spatial Cue Distortions Within a Virtualized Sound Field Caused by an Additional Listener
Realistically, we are rarely alone in a central position with respect to our acoustic environment, yet virtual sound fields are usually evaluated in this manner. Sound presentation with more than one…
Review of Self-Motion in the Context of Hearing and Hearing Device Research.
TLDR: It is still unclear to what extent individual factors affect the ecological validity of the findings, and further research is required to relate lab-based measures of self-motion to the individual's real-life hearing ability.
Vehicle Noise: Comparison of Loudness Ratings in the Field and the Laboratory
Objective: Distorted loudness perception is one of the main complaints of hearing aid users. Being able to measure loudness perception correctly in the clinic is essential for fitting hearing aids.
Master Thesis
The classical architectural types of concert halls have traditionally been ascribed a characteristic acoustic spatial impression, which is conditioned by the architectural shape. But, how easily can…
Development and evaluation of video recordings for the OLSA matrix sentence test
TLDR: An audiovisual version of the German matrix sentence test (MST), which uses the existing audio-only speech material, achieved gross speech intelligibility similar to results in the literature, despite the inherent asynchronies of dubbing.
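Matrix sentence tests such as the OLSA estimate the speech reception threshold (SRT), the SNR at which half of the words are understood. The mapping from SNR to word recognition is commonly modeled with a logistic psychometric function; the sketch below illustrates that model only, and its parameter values are placeholders, not the published OLSA norms:

```python
import math

def intelligibility(snr_db, srt_db=-7.1, slope=0.15):
    """Logistic psychometric function: probability of correct word
    recognition as a function of SNR in dB. `srt_db` is the 50% point
    (the SRT) and `slope` the gradient, in proportion per dB, at that
    point. Both defaults are illustrative placeholders."""
    return 1.0 / (1.0 + math.exp(-4.0 * slope * (snr_db - srt_db)))
```

In practice the test is run the other way around: word scores are measured at several SNRs and `srt_db` and `slope` are fitted to the data.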

References

Showing 1-10 of 53 references
Realistic virtual audiovisual environments for evaluating hearing aids with measures related to movement behavior
With increased complexity of hearing device algorithms, a strong interaction between motion behavior of the user and hearing device benefit is likely to be found. To be able to assess this interaction…
Virtual acoustic environments for comprehensive evaluation of model-based hearing devices *
TLDR: The software architecture and simulation methods used to produce virtual acoustic environments (VAEs) are outlined, and a set of VAEs rendered with the proposed software is described.
Toolbox for acoustic scene creation and rendering (TASCAR): Render methods and research applications
TASCAR is a toolbox for creation and rendering of dynamic acoustic scenes that allows direct user interaction and was developed for application in hearing aid research. This paper describes the…
Evaluation of spatial audio reproduction schemes for application in hearing aid research
TLDR: The results show performance differences and interaction effects between reproduction method and algorithm class that may be used for guidance when selecting the appropriate method and number of speakers for specific tasks in hearing aid research.
Enhancement of ambisonic binaural reproduction using directional audio coding with optimal adaptive mixing
TLDR: This paper proposes an improved DirAC method that directly synthesises the binaural cues from the estimated spatial parameters; it can accommodate higher-order Ambisonics (HOA) signals and has reduced computational requirements, making it suitable for lightweight processing with fast update rates and head-tracking support.
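Directional audio coding (DirAC) rests on per-band estimates of sound direction and diffuseness from a first-order (B-format) signal. As a rough illustration of that analysis step only, not of the binaural synthesis proposed in the cited paper, here is a broadband 2-D sketch; it assumes the encoding convention w = s, x = s·cos θ, y = s·sin θ for a source at azimuth θ:

```python
import math

def dirac_parameters(w, x, y):
    """Estimate azimuth and diffuseness for one frame of horizontal
    first-order B-format audio (w: omni; x, y: figure-of-eight).
    Real DirAC does this per time-frequency tile; this broadband
    version is a simplified sketch."""
    # instantaneous active-intensity components (up to a constant)
    ix = [wi * xi for wi, xi in zip(w, x)]
    iy = [wi * yi for wi, yi in zip(w, y)]
    mean_ix = sum(ix) / len(ix)
    mean_iy = sum(iy) / len(iy)
    # with the assumed encoding, the mean intensity vector
    # points towards the source azimuth
    azimuth = math.atan2(mean_iy, mean_ix)
    # diffuseness: 0 for a single plane wave, 1 when the
    # instantaneous intensity vectors cancel on average
    mean_norm = sum(math.hypot(a, b) for a, b in zip(ix, iy)) / len(ix)
    if mean_norm == 0.0:
        return azimuth, 1.0
    diffuseness = 1.0 - math.hypot(mean_ix, mean_iy) / mean_norm
    return azimuth, diffuseness
```

For a single encoded plane wave this returns the source azimuth and a diffuseness near zero; uncorrelated signals on the three channels drive the diffuseness towards one.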
Design and preliminary testing of a visually guided hearing aid.
An approach to hearing aid design is described, and preliminary acoustical and perceptual measurements are reported, in which an acoustic beam-forming microphone array is coupled to an…
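The beam-forming mentioned here can be illustrated with its simplest variant, a delay-and-sum beamformer: each microphone channel is advanced by a steering delay so that a wavefront from the look direction adds coherently while other directions partially cancel. A minimal integer-sample sketch (the cited study's actual processing may differ; real arrays use fractional delays and frequency-dependent weighting):

```python
def delay_and_sum(signals, delays):
    """Steer a microphone array: advance channel k by delays[k]
    samples, then average across channels. Samples shifted past
    the signal boundary are treated as zero."""
    n = len(signals[0])
    out = []
    for i in range(n):
        acc = 0.0
        for sig, d in zip(signals, delays):
            j = i + d  # advance this channel by its steering delay
            acc += sig[j] if 0 <= j < n else 0.0
        out.append(acc / len(signals))
    return out
```

With delays matching the wavefront's inter-channel arrival differences, an impulse seen at sample 5, 7, 9 on three channels sums coherently back at sample 5.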
Interactive simulation and free-field auralization of acoustic space with the rtSOFE
The Simulated Open Field Environment (SOFE), a loudspeaker setup in an anechoic chamber to render sound sources along with their simulated, spatialized reflections, has been used for more than two…
Spatial Acoustic Scenarios in Multichannel Loudspeaker Systems for Hearing Aid Evaluation.
TLDR: Hearing aid (HA) benefit, as predicted by signal-to-noise ratio (SNR) and speech intelligibility measures, differs between the reference condition and more realistic conditions for the tested beamformer algorithms.
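SNR-based benefit prediction relies on having the speech and noise paths available separately, which simulated scenes provide by construction. A minimal sketch of the broadband SNR computation, assuming separated speech and noise signals:

```python
import math

def snr_db(speech, noise):
    """Broadband signal-to-noise ratio in dB, computed from the mean
    powers of separately available speech and noise signals."""
    p_speech = sum(v * v for v in speech) / len(speech)
    p_noise = sum(v * v for v in noise) / len(noise)
    return 10.0 * math.log10(p_speech / p_noise)
```

Algorithm benefit can then be expressed as the SNR improvement, i.e. the output SNR of the processed signals minus the input SNR at the reference microphone.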
Localization of virtual sources in multichannel audio reproduction
TLDR: The results show that the auditory model can be used to predict perceived direction in multichannel sound reproduction near the median plane; a frequency-dependent capability to produce narrow-band virtual sources in targeted directions is reported.
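A common way to place virtual sources between loudspeakers in such localization experiments is pairwise amplitude panning in the style of Pulkki's VBAP: the source direction is written as a non-negative combination of the two adjacent loudspeaker direction vectors, and the gains are power-normalized. The snippet below is an illustrative 2-D version, not necessarily the rendering method of the cited study:

```python
import math

def vbap_pair_gains(src_az, spk1_az, spk2_az):
    """2-D pairwise amplitude panning: solve p = g1*l1 + g2*l2 for
    the loudspeaker gains, where p, l1, l2 are unit vectors for the
    source and the two loudspeakers, then normalize for constant
    power (g1^2 + g2^2 = 1). Azimuths in radians."""
    p = (math.cos(src_az), math.sin(src_az))
    l1 = (math.cos(spk1_az), math.sin(spk1_az))
    l2 = (math.cos(spk2_az), math.sin(spk2_az))
    # invert the 2x2 matrix whose columns are l1 and l2
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

For a source midway between loudspeakers at ±45° this yields equal gains of 1/√2; a source exactly at one loudspeaker gets gain 1 on that channel and 0 on the other.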