Christine Evers

In reverberant environments, a moving speaker yields a dynamically changing source-sensor geometry giving rise to a spatially-varying acoustic impulse response (AIR) between the source and sensor. Consequently, this leads to a time-varying convolutional relationship between the source signal and the observations and thus spectral colouration of the received …
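The time-varying convolutional relationship described above can be illustrated with a small sketch (our own construction, not the paper's code): each frame of the source signal is convolved with its own AIR, standing in for the response changing as the speaker moves, and the results are overlap-added. The AIRs here are random exponentially decaying placeholders, not measured responses.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_varying_convolve(source, airs, frame_len):
    """Convolve each frame of `source` with that frame's own AIR and
    overlap-add the results, modelling a slowly moving speaker."""
    out = np.zeros(len(source) + len(airs[0]) - 1)
    for k, air in enumerate(airs):
        start = k * frame_len
        frame = source[start:start + frame_len]
        seg = np.convolve(frame, air)
        out[start:start + len(seg)] += seg
    return out

# Placeholder source and per-frame AIRs (8 frames of 128 samples).
source = rng.standard_normal(1024)
airs = [rng.standard_normal(64) * np.exp(-np.arange(64) / 16.0)
        for _ in range(8)]
observed = time_varying_convolve(source, airs, frame_len=128)
```

When every frame shares the same AIR, the sketch collapses to an ordinary static convolution, which is the stationary special case the abstract contrasts against.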
This paper focuses on speaker tracking in robot audition for human-robot interaction. Using only acoustic signals, speaker tracking in enclosed spaces is subject to missing detections and spurious clutter measurements due to speech inactivity, reverberation and interference. Furthermore, many acoustic localization approaches estimate speaker direction, …
The accuracy of direction of arrival estimation tends to degrade under reverberant conditions due to the presence of reflected signal components which are correlated with the direct path. The recently proposed direct-path dominance test provides a means of identifying time-frequency regions in which a single signal path is dominant. By analysing only these …
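A direct-path-dominance style test can be sketched as follows (a minimal illustration in our own notation, not the paper's implementation): for each local time-frequency region of the multichannel STFT, form a spatial correlation matrix and keep the region only if its largest eigenvalue strongly dominates the second, i.e. the region is effectively rank one, which indicates a single dominant path. The threshold value is an assumed placeholder.

```python
import numpy as np

def dpd_test(stft_region, threshold=10.0):
    """stft_region: (n_mics, n_bins) complex STFT values from one local
    time-frequency region. Returns True if a single path dominates,
    judged by the eigenvalue ratio of the spatial correlation matrix."""
    R = stft_region @ stft_region.conj().T / stft_region.shape[1]
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending, real
    return eigvals[0] / max(eigvals[1], 1e-12) > threshold

# Synthetic check: a rank-1 region (one plane-wave path scaled by a fixed
# steering vector) should pass; a diffuse, uncorrelated region should not.
rng = np.random.default_rng(1)
steering = rng.standard_normal(4) + 1j * rng.standard_normal(4)
rank1 = np.outer(steering, rng.standard_normal(16))
diffuse = rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))
```

Only the regions that pass such a test would then be handed to the DOA estimator, which is the selection idea the abstract describes.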
Acoustic scene mapping creates a representation of positions of audio sources such as talkers within the surrounding environment of a microphone array. By allowing the array to move, the acoustic scene can be explored in order to improve the map. Furthermore, the spatial diversity of the kinematic array allows for estimation of the source-sensor distance in …
Enhancement of an unknown signal from distorted observations is an extremely important engineering problem. In addition to noise, the observation space often contains a degrading filter component. A typical example is blind speech enhancement, where a reverberant channel between a stationary source and the receiver can be modeled as a static infinite …
Reverberation and noise cause significant deterioration of audio quality and intelligibility to signals recorded in acoustic environments. Noise is usually modeled as a common signal observed in the room and independent of room acoustics. However, this simplistic model cannot necessarily capture the effects of separate noise sources at different locations …
Acoustic Simultaneous Localization and Mapping (a-SLAM) jointly localizes the trajectory of a microphone array installed on a moving platform, whilst estimating the acoustic map of surrounding sound sources, such as human speakers. Whilst traditional approaches for SLAM in the vision and optical research literature rely on the assumption that the …
Direction of arrival (DOA) estimation is a fundamental problem in acoustic signal processing. It is used in a diverse range of applications, including spatial filtering, speech dereverberation, source separation and diarization. Intensity vector-based DOA estimation is attractive, especially for spherical sensor arrays, because it is computationally …
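The intensity-vector approach can be sketched briefly (our own minimal construction, not a specific library API): given the pressure signal p and the particle-velocity components v, e.g. from a B-format or spherical-array recording, the time-averaged acoustic intensity vector points in the direction of propagation, away from the source, so negating and normalizing it yields a DOA estimate.

```python
import numpy as np

def intensity_doa(p, v):
    """p: (n,) complex pressure; v: (3, n) complex particle velocity.
    Returns a unit vector pointing from the array towards the source."""
    # Active intensity: real part of conj(pressure) * velocity, averaged.
    intensity = np.real(np.conj(p)[None, :] * v).mean(axis=1)  # (3,)
    return -intensity / np.linalg.norm(intensity)

# Synthetic plane wave arriving from `doa_true`: the wave propagates
# towards the array, so the velocity is -doa_true * p (up to scale).
rng = np.random.default_rng(2)
doa_true = np.array([0.6, 0.8, 0.0])
p = rng.standard_normal(512) + 1j * rng.standard_normal(512)
v = -doa_true[:, None] * p[None, :]
```

Because the estimate is a simple average over samples, it is cheap to compute per time-frequency bin, which is the computational attraction the abstract alludes to.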