Multi-sensor Fusion Using Dempster's Theory of Evidence for Video Segmentation

Abstract

Segmentation of image sequences is a challenging task in computer vision. Time-of-Flight cameras provide additional information, namely depth, that can be integrated as an additional feature into a segmentation approach. Depth information is typically less sensitive to changes in the environment; combined with appearance, it yields a more robust segmentation method. Motivated by the fact that a simple combination of two information sources might not be the best solution, we propose a novel fusion scheme based on Dempster's theory of evidence. In contrast to existing methods, Dempster's theory of evidence allows us to model both inaccuracy and uncertainty. The inaccuracy of the information is controlled by an adaptive weight that measures how reliable a given information source is. We compare our method with others on a publicly available set of image sequences and show that our proposed fusion scheme improves the segmentation.
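To make the fusion idea concrete, the following is a minimal sketch of Dempster's rule of combination, the core operation behind evidence-based fusion. It is not the authors' adaptive scheme: the mass values, the two-hypothesis frame (`fg`/`bg`), and the function name `dempster_combine` are illustrative assumptions only; the paper's contribution additionally weights each source's masses by an adaptive reliability measure before combining.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule of combination:
        m(A) = (1 / (1 - K)) * sum over B ∩ C = A of m1(B) * m2(C),
    where K is the total mass of conflicting (empty-intersection) pairs."""
    combined = {}
    conflict = 0.0  # K: mass assigned to pairs whose intersection is empty
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources are incompatible")
    # Normalise by 1 - K so the combined masses sum to one
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Illustrative example: a two-hypothesis frame {foreground, background},
# with one mass function per cue (appearance and depth); numbers are invented.
FG, BG = frozenset({"fg"}), frozenset({"bg"})
THETA = FG | BG  # mass on the full frame expresses uncertainty
appearance = {FG: 0.6, BG: 0.3, THETA: 0.1}
depth = {FG: 0.5, BG: 0.2, THETA: 0.3}

fused = dempster_combine(appearance, depth)
```

Note how mass placed on the full frame `THETA` lets a source say "I don't know" instead of being forced to commit to foreground or background; this is the modeling freedom the abstract contrasts with a plain combination of the two cues.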

DOI: 10.1007/978-3-642-41827-3_54

Cite this paper

@inproceedings{Scheuermann2013MultisensorFU,
  title     = {Multi-sensor Fusion Using Dempster's Theory of Evidence for Video Segmentation},
  author    = {Bj{\"o}rn Scheuermann and Sotirios Gkoutelitsas and Bodo Rosenhahn},
  booktitle = {CIARP},
  year      = {2013}
}