Temporally Coherent 4D Reconstruction of Complex Dynamic Scenes

@inproceedings{Mustafa2016TemporallyC4,
  title={Temporally Coherent 4D Reconstruction of Complex Dynamic Scenes},
  author={Armin Mustafa and Hansung Kim and Jean-Yves Guillemaut and Adrian Hilton},
  booktitle={2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2016},
  pages={4660--4669}
}
This paper presents an approach for the reconstruction of 4D temporally coherent models of complex dynamic scenes. No prior knowledge of scene structure or camera calibration is required, allowing reconstruction from multiple moving cameras. Sparse-to-dense temporal correspondence is integrated with joint multi-view segmentation and reconstruction to obtain a complete 4D representation of static and dynamic objects. Temporal coherence is exploited to overcome visual ambiguities, resulting in improved…
Temporally Coherent General Dynamic Scene Reconstruction
TLDR
This paper demonstrates unsupervised reconstruction of complete temporally coherent 4D scene models with improved non-rigid object segmentation and shape reconstruction, and their application to free-viewpoint rendering and virtual reality.
Semantically Coherent Co-Segmentation and Reconstruction of Dynamic Scenes
  • A. Mustafa, A. Hilton
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR
Evaluation on challenging indoor and outdoor sequences with hand-held moving cameras shows improved accuracy in segmentation, temporally coherent semantic labelling and 3D reconstruction of dynamic scenes.
U4D: Unsupervised 4D Dynamic Scene Understanding
TLDR
This work simultaneously estimates a detailed model that includes a per-pixel semantically and temporally coherent reconstruction, together with instance-level segmentation exploiting photo-consistency, semantic, and motion information, enabling per-person semantic instance segmentation of multiple interacting people in complex dynamic scenes.
Semantically Coherent 4D Scene Flow of Dynamic Scenes
TLDR
Comprehensive performance evaluation against state-of-the-art techniques on challenging indoor and outdoor sequences with hand-held moving cameras shows improved accuracy in 4D scene flow, segmentation, temporally coherent semantic labelling, and reconstruction of dynamic scenes.
Multi-view Dynamic Shape Refinement Using Local Temporal Integration
TLDR
A templateless and local approach to 4D shape reconstructions in multi-view environments and it is shown that it improves reconstruction accuracy by considering multiple frames, and a multi-camera synthetic dataset that provides ground-truth data for mid-scale dynamic scenes is introduced.
4D Temporally Coherent Light-Field Video
TLDR
A novel method is proposed to obtain Epipolar Plane Images (EPIs) from a sparse light-field camera array and extract a spatio-temporally coherent light-field video representation.
Dynamic Scene Novel View Synthesis via Deferred Spatio-temporal Consistency
4D Match Trees for Non-rigid Surface Alignment
TLDR
Comparison to previous 2D and 3D scene flow demonstrates that 4D Match Trees achieve reduced errors due to drift and improved robustness to large non-rigid deformations to obtain a temporally consistent 4D representation.
Few-camera Dynamic Scene Variational Novel-view Synthesis
TLDR
A variational diffusion formulation on depths and colors lets SfM and NVS robustly cope with noise by enforcing spatio-temporal consistency via per-pixel reprojection weights derived from the input views, expanding the kinds of scenes to which these techniques can be applied.
...
...

References

SHOWING 1-10 OF 50 REFERENCES
3D Reconstruction of Dynamic Scenes with Multiple Handheld Cameras
TLDR
This work proposes a novel dense depth estimation method that automatically recovers accurate and consistent depth maps from synchronized video sequences taken by a few handheld cameras, simultaneously solving bilayer segmentation and depth estimation in a unified energy minimization framework.
Multi-object reconstruction from dynamic scenes: An object-centered approach
Simultaneous Segmentation and 3D Reconstruction of Monocular Image Sequences
TLDR
This work identifies the necessary components of an algorithm that simultaneously tracks features, groups them into rigidly moving segments, and reconstructs all segments in 3D, and proposes solutions for each.
3D Reconstruction of Dynamic Textures in Crowd Sourced Data
TLDR
This work combines large-scale crowd-sourced SfM techniques with image-content segmentation and shape-from-silhouette techniques in an iterative framework for 3D shape estimation, enabling more complete and robust 3D modeling and more realistic visualizations through the identification of dynamic scene elements amenable to dynamic texture mapping.
Temporally Consistent Reconstruction from Multiple Video Streams Using Enhanced Belief Propagation
TLDR
The belief propagation algorithm is modified to operate on a 3D graph that includes both spatial and temporal neighbors and to discard messages from outlying neighbors; methods are also proposed for introducing a bias and for suppressing noise typically observed in uniform regions.
Joint Multi-Layer Segmentation and Reconstruction for Free-Viewpoint Video Applications
TLDR
This paper proposes a technique which is able to efficiently compute a high-quality scene representation via graph-cut optimisation of an energy function combining multiple image cues with strong priors in a view-dependent manner with respect to each input camera.
Space-Time Joint Multi-layer Segmentation and Depth Estimation
  • Jean-Yves Guillemaut, A. Hilton
  • 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission
  • 2012
TLDR
This work proposes a framework for joint segmentation and reconstruction which explicitly enforces temporal consistency by formulating the problem as an energy minimisation generalised to groups of frames, and uses optical flow in combination with a confidence measure to impose robust temporal smoothness constraints.
Modeling Dynamic Scenes Recorded with Freely Moving Cameras
TLDR
A probabilistic framework is proposed to deal with dynamic scenes captured in outdoor environments with moving cameras and to provide a volumetric reconstruction of all the dynamic elements of the scene.
Calibration of Nodal and Free-Moving Cameras in Dynamic Scenes for Post-Production
TLDR
An algorithm is presented for through-the-lens calibration of a moving camera in a common film-production and broadcasting scenario; it identifies a subset of static cameras that are more likely to generate a high number of scene-image correspondences and robustly handles dynamic scenes.
Space-time isosurface evolution for temporally coherent 3D reconstruction
  • Bastian Goldlücke, M. Magnor
  • Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.
  • 2004
TLDR
The scheme is designed to optimize photo-consistency, and the geometry it reconstructs is significantly better than results obtained by space-carving approaches that do not enforce temporal coherence.
...
...