Online learning and fusion of orientation appearance models for robust rigid object tracking


We present a robust framework for learning and fusing different modalities for rigid object tracking. Our method fuses data obtained from a standard visual camera and dense depth maps obtained by low-cost consumer depth cameras such as the Kinect. To combine these two completely different modalities, we propose to use features that do not depend on the…
DOI: 10.1109/FG.2013.6553798