ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild

@inproceedings{Zhao2022ParticleSfMED,
  title={ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild},
  author={Wang Zhao and Shao-Hui Liu and Hengkai Guo and Wenping Wang and Y. Liu},
  booktitle={European Conference on Computer Vision},
  year={2022}
}
Estimating the pose of a moving camera from monocular video is a challenging problem, especially due to the presence of moving objects in dynamic environments, where the performance of existing camera pose estimation methods is susceptible to pixels that are not geometrically consistent. To tackle this challenge, we present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence initialized from pairwise optical flow. Our key idea is to optimize…
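
The key ingredient named in the title, dense point trajectories built from pairwise optical flow, can be made concrete with a short sketch. The snippet below is an illustration under stated assumptions rather than the authors' implementation: it chains hypothetical forward flow fields (`fwd_flows`) into per-point trajectories and uses backward flow (`bwd_flows`) for a forward-backward consistency check that prunes occluded or geometrically inconsistent points; the stride and threshold are likewise illustrative.

```python
# Minimal sketch (not the paper's code): chaining pairwise optical flow into
# dense point trajectories with a forward-backward consistency check.
import numpy as np

def bilinear_sample(flow, pts):
    """Sample a (H, W, 2) flow field at sub-pixel points pts of shape (N, 2) = (x, y)."""
    h, w = flow.shape[:2]
    x = np.clip(pts[:, 0], 0, w - 1.001)
    y = np.clip(pts[:, 1], 0, h - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    dx, dy = (x - x0)[:, None], (y - y0)[:, None]
    return (flow[y0, x0] * (1 - dx) * (1 - dy) + flow[y0, x0 + 1] * dx * (1 - dy)
            + flow[y0 + 1, x0] * (1 - dx) * dy + flow[y0 + 1, x0 + 1] * dx * dy)

def chain_trajectories(fwd_flows, bwd_flows, stride=8, fb_thresh=1.0):
    """fwd_flows[t] / bwd_flows[t]: (H, W, 2) flow from frame t to t+1 and back."""
    h, w = fwd_flows[0].shape[:2]
    ys, xs = np.mgrid[0:h:stride, 0:w:stride]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    alive = np.ones(len(pts), dtype=bool)
    traj = [pts.copy()]
    for fwd, bwd in zip(fwd_flows, bwd_flows):
        nxt = pts + bilinear_sample(fwd, pts)      # propagate points one frame forward
        back = nxt + bilinear_sample(bwd, nxt)     # map them back with the backward flow
        err = np.linalg.norm(back - pts, axis=1)   # forward-backward round-trip error
        inside = (nxt[:, 0] >= 0) & (nxt[:, 0] < w) & (nxt[:, 1] >= 0) & (nxt[:, 1] < h)
        alive &= inside & (err < fb_thresh)        # drop occluded or inconsistent points
        pts = nxt
        traj.append(pts.copy())
    return np.stack(traj), alive                   # (T, N, 2) positions and (N,) validity mask
```

In a full pipeline, surviving long-range tracks would typically be screened for dynamically moving points before being passed to a structure-from-motion back end.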

Robust Dynamic Radiance Fields

This work addresses the robustness issue by jointly estimating the static and dynamic radiance fields along with the camera parameters (poses and focal length), and shows favorable performance over state-of-the-art dynamic view synthesis methods.

References

Scalable structure from motion for densely sampled videos

This work describes the first system capable of handling high-resolution, high-frame-rate video data with close to real-time performance, and can robustly integrate data from different video sequences, allowing multiple video streams to be calibrated simultaneously in an efficient and globally optimal way.

Robust Consistent Video Depth Estimation

An algorithm for estimating consistent dense depth maps and camera poses from a monocular video that quantitatively outperforms the state of the art on the Sintel benchmark for both depth and pose estimation, and attains favorable qualitative results across diverse in-the-wild datasets.

Recovering Accurate 3D Human Pose in the Wild Using IMUs and a Moving Camera

This work proposes a method that combines a single hand-held camera and a set of Inertial Measurement Units (IMUs) attached to the body limbs to estimate accurate 3D poses in the wild, obtaining an accuracy of 26 mm, which makes it accurate enough to serve as a benchmark for image-based 3D pose estimation in the wild.

Robust Dense Mapping for Large-Scale Dynamic Environments

A stereo-based dense mapping algorithm for large-scale dynamic urban environments that separately reconstructs the static background, the moving objects, and the potentially moving but currently stationary objects, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments.

Robust camera location estimation by convex programming

  • Onur Özyesil, A. Singer
  • Computer Science, Mathematics
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2015
This paper provides a complete characterization of well-posed instances of the location estimation problem, by presenting its relation to the existing theory of parallel rigidity, and introduces a two-step approach, comprised of a pairwise direction estimation method robust to outliers in point correspondences between image pairs, and a convex program to maintain robustness to outlier directions.
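
The second step of that two-step approach, a convex program over camera locations given noisy pairwise directions, can be sketched as follows. This is a minimal illustration in the spirit of a least-unsquared-deviations objective, assuming cvxpy is available; the function name and the inputs `edges` (camera index pairs) and `dirs` (estimated unit directions from camera i to camera j) are hypothetical, and the paper's exact formulation and preprocessing differ.

```python
# Hedged sketch: recover camera locations from pairwise direction estimates with a
# convex, outlier-robust objective (un-squared residual norms).
import numpy as np
import cvxpy as cp

def solve_camera_locations(n_cams, edges, dirs):
    """edges[k] = (i, j); dirs[k] = unit 3-vector, estimated direction from camera i to j."""
    t = cp.Variable((n_cams, 3))                          # unknown camera locations
    a = cp.Variable(len(edges))                           # unknown pairwise baseline lengths
    residuals = [cp.norm(t[j] - t[i] - a[k] * dirs[k])    # un-squared L2 residual per edge
                 for k, (i, j) in enumerate(edges)]
    constraints = [a >= 1,                                 # keeps the trivial all-zero solution out
                   cp.sum(t, axis=0) == 0]                 # fixes the global translation gauge
    cp.Problem(cp.Minimize(cp.sum(cp.hstack(residuals))), constraints).solve()
    return t.value

# Toy usage: three collinear cameras observed through their pairwise directions.
locs = solve_camera_locations(
    3,
    [(0, 1), (1, 2), (0, 2)],
    [np.array([1.0, 0, 0]), np.array([1.0, 0, 0]), np.array([1.0, 0, 0])])
```

Because each residual enters the objective without squaring, a few badly estimated directions cannot dominate the solution, which is the robustness to outlier directions that the summary refers to.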

Particle Video: Long-Range Motion Estimation Using Point Trajectories

  • P. Sand, S. Teller
  • Computer Science
    2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)
  • 2006
A new approach to motion estimation in video using a set of particles, yielding long-range point trajectories that are useful for a variety of applications and cannot be directly obtained with existing methods such as optical flow or feature tracking.

DTAM: Dense tracking and mapping in real-time

It is demonstrated that a dense model permits superior tracking performance under rapid motion compared to a state-of-the-art feature-based method, and the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application is shown.

It's Moving! A Probabilistic Model for Causal Motion Segmentation in Moving Camera Videos

This work derives from first principles a likelihood function for assessing the probability of an optical flow vector given the 2D motion direction of an object and develops a motion segmentation algorithm that beats current state-of-the-art methods by a large margin.
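
To make that idea concrete, here is a hedged sketch of an angle-based flow likelihood: it scores how compatible an observed flow vector is with a hypothesized 2D motion direction, giving low-magnitude flow (whose direction is unreliable) a broader angular distribution. The von Mises form and the magnitude-dependent concentration are illustrative assumptions, not the paper's exact derivation.

```python
# Hedged sketch: per-pixel likelihood of observed optical flow given a hypothesized
# 2D motion direction, using a von Mises distribution over the flow angle.
import numpy as np
from scipy.special import i0   # modified Bessel function, normalizes the von Mises density

def flow_angle_likelihood(flow, motion_dir, kappa0=8.0):
    """flow: (H, W, 2) observed flow; motion_dir: (2,) hypothesized unit direction."""
    mag = np.linalg.norm(flow, axis=-1)
    obs_angle = np.arctan2(flow[..., 1], flow[..., 0])
    hyp_angle = np.arctan2(motion_dir[1], motion_dir[0])
    delta = obs_angle - hyp_angle
    kappa = kappa0 * mag / (1.0 + mag)   # angle confidence grows with magnitude (assumption)
    return np.exp(kappa * np.cos(delta)) / (2.0 * np.pi * i0(kappa))
```

Comparing such scores under a camera-induced motion direction against alternative object-motion directions yields a per-pixel evidence term for segmenting moving objects.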

Global Fusion of Relative Motions for Robust, Accurate and Scalable Structure from Motion

This work proposes a new global calibration approach based on the fusion of relative motions between image pairs, and presents an efficient a contrario trifocal tensor estimation method, from which stable and precise translation directions can be extracted.

DynaSLAM: Tracking, Mapping, and Inpainting in Dynamic Scenes

DynaSLAM is a visual SLAM system that, building on ORB-SLAM2, adds the capabilities of dynamic object detection and background inpainting, and outperforms the accuracy of standard visual SLAM baselines in highly dynamic scenarios.
...