Audiovisual Voice Activity Detection Based on Microphone Arrays and Color Information

Abstract

Audiovisual voice activity detection (VAD) is a necessary stage in many applications, such as advanced teleconferencing, speech recognition, and human-computer interaction. Lip motion and audio analysis provide a large amount of complementary information that can be integrated to produce more robust audiovisual VAD schemes, as we discuss in this paper. Lip motion is very useful for detecting the active speaker, and we introduce a new approach for lip segmentation and visual VAD. First, the algorithm performs skin segmentation to reduce the search area for lip extraction, and within the delimited area the most likely lip and non-lip regions are detected using a Bayesian approach. Lip motion is then analyzed with Hidden Markov Models (HMMs) that estimate the likelihood of active speech within a temporal window. Audio information is captured by an array of microphones, and sound-based VAD is posed as the detection of spatio-temporally coherent sound sources through another set of HMMs. To increase the robustness of the proposed system, a late fusion approach combines the outputs of the two modalities (audio and video). Our experimental results indicate that the proposed audiovisual approach outperforms existing VAD algorithms.
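The abstract does not specify the exact fusion rule, so the sketch below only illustrates the general late-fusion idea: each modality (the audio HMMs and the video HMMs) produces a per-frame speech likelihood, and the two scores are combined into a single VAD decision. The weighted linear combination and threshold used here are assumptions for illustration, not the paper's method.

```python
# Hedged sketch of late fusion for audiovisual VAD. The per-modality scores
# would come from the audio and video HMM classifiers; the linear opinion
# pool and the 0.5 threshold are illustrative assumptions.

def late_fusion_vad(audio_scores, video_scores, w_audio=0.5, threshold=0.5):
    """Combine per-frame speech likelihoods from the audio (microphone-array)
    and video (lip-motion) classifiers into a boolean VAD decision per frame."""
    assert len(audio_scores) == len(video_scores)
    w_video = 1.0 - w_audio
    decisions = []
    for a, v in zip(audio_scores, video_scores):
        fused = w_audio * a + w_video * v  # linear opinion pool
        decisions.append(fused >= threshold)
    return decisions

# Example: each HMM set emits one speech likelihood per frame.
audio = [0.9, 0.8, 0.2, 0.1]
video = [0.7, 0.9, 0.3, 0.4]
print(late_fusion_vad(audio, video))  # [True, True, False, False]
```

A late fusion scheme of this kind keeps the two classifiers independent, so one modality can still drive the decision when the other degrades (e.g., occluded lips or background noise).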

DOI: 10.1109/JSTSP.2012.2237379


Cite this paper

@article{Minotto2013AudiovisualVA,
  title={Audiovisual Voice Activity Detection Based on Microphone Arrays and Color Information},
  author={Vicente P. Minotto and Carlos B. O. Lopes and Jacob Scharcanski and Cl{\'a}udio Rosito Jung and Bowon Lee},
  journal={IEEE Journal of Selected Topics in Signal Processing},
  year={2013},
  volume={7},
  pages={147-156}
}