On the Two-View Geometry of Unsynchronized Cameras

@article{Albl2017OnTT,
  title={On the Two-View Geometry of Unsynchronized Cameras},
  author={Cenek Albl and Zuzana Kukelova and Andrew William Fitzgibbon and Jan Heller and Matej Sm{\'i}d and Tom{\'a}s Pajdla},
  journal={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017},
  pages={5593-5602}
}
  • Cenek Albl, Z. Kukelova, T. Pajdla
  • Published 22 April 2017
  • Computer Science
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We present new methods of simultaneously estimating camera geometry and time shift from video sequences from multiple unsynchronized cameras. Algorithms for simultaneous computation of a fundamental matrix or a homography with unknown time shift between images are developed. Our methods use minimal correspondence sets (eight for fundamental matrix and four and a half for homography) and therefore are suitable for robust estimation using RANSAC. Furthermore, we present an iterative algorithm… 
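
The abstract's core model can be illustrated with a small, hedged sketch. The paper derives minimal solvers that estimate the fundamental matrix and the time shift jointly; the code below is not that solver, but a brute-force illustration of the underlying linearization x'(t + beta) ≈ x' + beta·v', where v' is the image velocity of each track in the second camera and beta is a candidate time shift in frames. The function name, the use of OpenCV's findFundamentalMat, and the search range for beta are assumptions made for this sketch only.

    import cv2
    import numpy as np

    def fundamental_with_time_shift(x1, x2, v2, betas=np.linspace(-2.0, 2.0, 81)):
        # x1, x2: Nx2 corresponding points; v2: Nx2 image velocities in view 2.
        # For each candidate shift, correct the second-view points with the
        # first-order motion model and estimate F by RANSAC; keep the shift
        # that yields the most inliers.
        best = (-1, None, 0.0)                        # (inlier count, F, beta)
        for beta in betas:
            x2_shifted = (x2 + beta * v2).astype(np.float64)
            F, mask = cv2.findFundamentalMat(
                x1.astype(np.float64), x2_shifted, cv2.FM_RANSAC, 1.0, 0.999)
            if F is None:
                continue
            inliers = int(mask.sum())
            if inliers > best[0]:
                best = (inliers, F, float(beta))
        return best[1], best[2]                       # F and the estimated time shift

In contrast to this grid search, the paper's solvers treat the time shift as an unknown inside the minimal problem (eight correspondences for the fundamental matrix), so they fit directly into a RANSAC loop.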

Citations

Simple Triangulation for Asynchronous Stereo Cameras

  • M. Shimizu
  • Computer Science
    2019 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)
  • 2019
A method to measure a three-dimensional trajectory using asynchronous stationary stereo cameras placed such that each camera is in the other's view angle, which enables the use of commercially available digital cameras in an outdoor environment and requires no impractically large calibration target or any estimation of the fundamental matrix.
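
As a rough illustration of triangulation with asynchronous cameras, the sketch below interpolates the observation in the second camera to the first camera's timestamp and then applies standard DLT triangulation. This is only a minimal sketch under the assumption of known projection matrices and linear image motion between consecutive frames; it is not the method proposed in the paper above, and all names are illustrative.

    import numpy as np

    def triangulate_async(P_a, P_b, x_a, t_a, xs_b, ts_b):
        # P_a, P_b: 3x4 projection matrices; x_a: 2D point in camera A at time t_a;
        # xs_b, ts_b: two consecutive observations/timestamps of the same track
        # in camera B that bracket t_a.
        w = (t_a - ts_b[0]) / (ts_b[1] - ts_b[0])     # interpolation weight in B
        x_b = (1.0 - w) * np.asarray(xs_b[0]) + w * np.asarray(xs_b[1])
        A = np.vstack([
            x_a[0] * P_a[2] - P_a[0],                 # standard DLT rows for view A
            x_a[1] * P_a[2] - P_a[1],
            x_b[0] * P_b[2] - P_b[0],                 # and for the interpolated view B
            x_b[1] * P_b[2] - P_b[1],
        ])
        _, _, Vt = np.linalg.svd(A)                   # least-squares null vector
        X = Vt[-1]
        return X[:3] / X[3]                           # inhomogeneous 3D point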

Reconstruction of 3D flight trajectories from ad-hoc camera networks

It is shown that, in spite of the weakly constrained setting, recent developments in computer vision make it possible to reconstruct trajectories in 3D from unsynchronized, uncalibrated networks of consumer cameras, and this approach enables robust and accurate outside-in tracking of dynamically flying targets, with cheap and easy-to-deploy equipment.

Relative Pose Solvers using Monocular Depth

An off-the-shelf monocular depth network is used to provide an estimation of up-to-scale depth per pixel, and three new methods for solving for relative pose as well as a new algorithm for homography estimation are proposed.

Human Pose as Calibration Pattern: 3D Human Pose Estimation with Multiple Unsynchronized and Uncalibrated Cameras

A novel algorithm for estimating 3D human pose from multi-view videos captured by unsynchronized and uncalibrated cameras is proposed, and a geometric constraint based on the prior knowledge that the reference points consist of human joints is introduced.

Single-Frame based Deep View Synchronization for Unsynchronized Multi-Camera Surveillance

A synchronization model that works in conjunction with existing deep neural network (DNN)-based multi-view models, thus avoiding a redesign of the whole model, is proposed; it is applied to different DNN-based multi-camera vision tasks under the unsynchronized setting and achieves good performance compared to baselines.

Relative Pose from Deep Learned Depth and a Single Affine Correspondence

The proposed 1AC+D solver leads to similar accuracy as traditional approaches while being significantly faster, and is demonstrated on scenes from the 1DSfM dataset using a state-of-the-art global SfM algorithm.

3D Reconstruction from public webcams

It is shown that, using recent advances in computer vision, the cameras can be successfully calibrated, 3D reconstructions of the static scene performed, and the 3D trajectories of moving objects recovered.

Automatic Rectification of the Hybrid Stereo Vision System

A perspective projection model is proposed to reduce the computational complexity of the hybrid stereoscopic 3D reconstruction, and experiments demonstrate the accuracy and effectiveness of the proposed method for automatically rectifying the dynamic hybrid stereo vision system.

Multi-view video synchronization using motion rhythms of human joints

The motion rhythm of 2D human joints is introduced as a cue for synchronization and the proposed method detects motion rhythms from videos and estimates temporal offsets with the best harmonized motion rhythms.

Multi-Video Temporal Synchronization by Matching Pose Features of Shared Moving Subjects

A new Synchronization Network (SynNet) is developed which includes a feature aggregation module, a matching cost volume and several classification layers to infer the time offset between different videos by exploiting view-invariant human pose features.

References

Showing 1-10 of 37 references

Self-calibration of asynchronized camera networks

  • M. NischtR. Swaminathan
  • Computer Science
    2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops
  • 2009
This paper presents a simple method to fully calibrate the asynchronized cameras of differing frame rates from the acquired video content directly and shows how this approach can be used for robust 3D reconstruction in spite of using asynchronized cameras.

Video synchronization from human motion using rank constraints

Tracking from multiple view points: Self-calibration of space and time

  • G. Stein
  • Computer Science
    Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)
  • 1999
This paper tackles the problem of self-calibration of multiple cameras which are very far apart and proposes a three-step approach: first, moving objects in the scene are used to determine a rough planar alignment; next, static features are used to improve the alignment; and finally, the epipolar geometry is computed from the homography matrix of the planar alignment.
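
The last step of that three-step approach, computing epipolar geometry from a plane-induced homography, corresponds to the standard relation F = [e']_x H (Hartley and Zisserman). Below is a minimal sketch, assuming the homography H and the second-image epipole e' have already been estimated; the function name is illustrative.

    import numpy as np

    def fundamental_from_homography(H, e2):
        # e2: homogeneous epipole in the second image; H: 3x3 plane-induced homography.
        ex = np.array([[0.0, -e2[2], e2[1]],
                       [e2[2], 0.0, -e2[0]],
                       [-e2[1], e2[0], 0.0]])         # skew-symmetric matrix [e']_x
        return ex @ H                                  # F = [e']_x H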

View-invariant alignment and matching of video sequences

A dynamic programming approach using the similarity measure is proposed to find the nonlinear time-warping function for videos containing human activities, and shows a great improvement compared to state-of-the-art techniques.

Subframe Video Synchronization via 3D Phase Correlation

An iterative procedure that successively achieves the alignment in space and time is proposed and its convergence is experimentally verified, and subframe accuracy is achieved by extending the existing image subpixel registration scheme to subframe video synchronization.
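
The phase-correlation idea can be shown in one dimension: correlate two temporal signals derived from the videos (for example, the mean frame intensity over time) and refine the correlation peak by parabolic interpolation to obtain a subframe offset. This is a 1D analogue written only to illustrate the principle, not the paper's 3D (x, y, t) phase correlation; the function name and the choice of 1D signals are assumptions of this sketch.

    import numpy as np

    def temporal_offset_1d(sig_a, sig_b):
        # Returns the (possibly fractional) delay of sig_b relative to sig_a,
        # assuming both signals have the same length and frame rate.
        n = len(sig_a)
        A = np.fft.fft(sig_a - np.mean(sig_a))
        B = np.fft.fft(sig_b - np.mean(sig_b))
        cross = B * np.conj(A)
        cross /= np.abs(cross) + 1e-12                # normalized cross-power spectrum
        corr = np.real(np.fft.ifft(cross))            # peak location encodes the shift
        k = int(np.argmax(corr))
        # parabolic fit through the peak and its neighbours -> subframe accuracy
        y0, y1, y2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
        denom = y0 - 2.0 * y1 + y2
        frac = 0.5 * (y0 - y2) / denom if abs(denom) > 1e-12 else 0.0
        shift = k + frac
        return shift - n if shift > n / 2 else shift  # wrap to a signed offset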

Tri-focal tensor-based multiple video synchronization with subframe optimization

A novel method is presented for synchronizing multiple (more than two) uncalibrated video sequences recording the same event with free-moving full-perspective cameras; it takes advantage of tri-view geometry constraints instead of the commonly used two-view constraint, for their better performance in measuring geometric alignment when video frames are synchronized.

Markerless Motion Capture with unsynchronized moving cameras

This work presents an approach for markerless motion capture (MoCap) of articulated objects recorded with multiple unsynchronized moving cameras, which allows people to be tracked with off-the-shelf handheld video cameras.

Linear Sequence-to-Sequence Alignment

A novel approach is presented that reduces the problem for general N to the robust estimation of a single line in R^N, which captures all temporal relations between the sequences and can be computed without any prior knowledge of these relations.
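
For two sequences, the line-in-R^N model reduces to the affine time map t2 ≈ alpha·t1 + beta (frame-rate ratio and offset), which can be fit robustly from noisy frame-index correspondences. The sketch below is an illustrative RANSAC line fit for that N = 2 case, not the paper's implementation; names and thresholds are assumptions.

    import numpy as np

    def ransac_time_line(t1, t2, iters=500, tol=0.5, seed=0):
        # t1, t2: arrays of putatively corresponding frame indices in the two videos.
        rng = np.random.default_rng(seed)
        best_inliers, best = -1, (1.0, 0.0)
        for _ in range(iters):
            i, j = rng.choice(len(t1), size=2, replace=False)
            if t1[i] == t1[j]:
                continue
            alpha = (t2[j] - t2[i]) / (t1[j] - t1[i])  # frame-rate ratio hypothesis
            beta = t2[i] - alpha * t1[i]               # time-offset hypothesis
            inliers = int(np.sum(np.abs(alpha * t1 + beta - t2) < tol))
            if inliers > best_inliers:
                best_inliers, best = inliers, (alpha, beta)
        return best                                    # (alpha, beta) with most inliers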

Feature-Based Multi-video Synchronization with Subframe Accuracy

A novel algorithm for temporally synchronizing multiple videos capturing the same dynamic scene by using a stable RANSAC-based optimization approach that identifies an informative subset of video pairs, which prevents the RANSAC algorithm from being biased by outliers.

Synchronizing video sequences

  • T. TuytelaarsL. Gool
  • Computer Science
    Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.
  • 2004
A novel method for automatically synchronizing two video sequences of the same event, which starts from five point correspondences tracked throughout the video sequences, provided by wide-baseline matching and tracking techniques.