Wide Baseline Matching between Unsynchronized Video Sequences

Abstract

3D reconstruction of a dynamic scene from features in two cameras usually requires synchronization and correspondences between the cameras. These may be hard to achieve due to occlusions, different orientations, different scales, etc. In this work we present an algorithm for reconstructing a dynamic scene from sequences acquired by two uncalibrated, unsynchronized, fixed affine cameras. It is assumed that (possibly) different points are tracked in the two sequences. The only constraint relating the two cameras is that every 3D point tracked in one sequence can be described as a linear combination of some of the 3D points tracked in the other sequence. Such a constraint is useful, for example, for articulated objects: we may track some points on an arm in the first sequence and some other points on the same arm in the second sequence. At the other extreme, the model can be used for generally moving points tracked in both sequences without knowing the correct permutation. In between, it can cover non-rigid bodies with local rigidity constraints. We present linear algorithms for synchronizing the two sequences and reconstructing the 3D points tracked in both views. Outlier points are automatically detected and discarded. The algorithm handles both 3D objects and planar objects in a unified framework, thereby avoiding numerical problems that exist in other methods.
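The abstract states only that the synchronization and reconstruction algorithms are linear; it does not describe them. As a purely illustrative aid, the NumPy sketch below (all dimensions, variable names, and the particular rank test are assumptions made here, not the paper's implementation) shows why a rank criterion can expose the temporal offset between two fixed affine cameras when every 3D point tracked in the second sequence is a fixed linear combination of the points tracked in the first: at the correct alignment, the stacked per-frame measurements of both sequences lie in a low-dimensional subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
F, N1, N2 = 60, 4, 4          # frames scored, points tracked per sequence
max_shift, true_shift = 5, 3  # hypothetical temporal offset between cameras

# Independently moving 3D points tracked in sequence 1: S[t] is 3 x N1.
S = rng.standard_normal((F + max_shift, 3, N1))

# Assumed model: every 3D point tracked in sequence 2 is a fixed linear
# combination of the 3D points tracked in sequence 1.
C = rng.standard_normal((N1, N2))

# Two fixed, uncalibrated affine cameras (2x3 linear part + translation).
A1, b1 = rng.standard_normal((2, 3)), rng.standard_normal((2, 1))
A2, b2 = rng.standard_normal((2, 3)), rng.standard_normal((2, 1))

# Observed image measurements: camera 2 starts `true_shift` frames late.
W1 = [A1 @ S[t] + b1 for t in range(F + max_shift)]
W2 = [A2 @ (S[t + true_shift] @ C) + b2 for t in range(F)]

def alignment_matrix(d):
    """Per-frame joint coordinate vectors for a candidate shift d."""
    rows = [np.concatenate([W1[t + d].ravel(), W2[t].ravel(), [1.0]])
            for t in range(F)]
    return np.array(rows)     # F x (2*N1 + 2*N2 + 1)

for d in range(max_shift + 1):
    sv = np.linalg.svd(alignment_matrix(d), compute_uv=False)
    # At the correct shift every joint row is an affine function of the
    # 3*N1 coordinates of S[t+d], so the rank is at most 3*N1 + 1 = 13
    # and singular value #14 collapses toward zero.
    print(f"shift={d}: sigma_14 = {sv[13]:.2e}")
```

With the values above, the residual singular value should drop by many orders of magnitude only at shift=3, the simulated offset; with noisy tracks one would pick the shift minimizing this residual. This is only a toy demonstration of the underlying rank principle, not the authors' method, which additionally reconstructs the 3D points and rejects outliers.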

DOI: 10.1007/s11263-005-4841-0


Statistics

57 citations (Semantic Scholar estimate based on the available data).

Cite this paper

@article{Wolf2005WideBM,
  title   = {Wide Baseline Matching between Unsynchronized Video Sequences},
  author  = {Lior Wolf and Assaf Zomet},
  journal = {International Journal of Computer Vision},
  year    = {2005},
  volume  = {68},
  pages   = {43-52}
}