DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time

@inproceedings{Newcombe2015DynamicFusionRA,
  title={DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time},
  author={Richard A. Newcombe and Dieter Fox and Steven M. Seitz},
  booktitle={2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2015},
  pages={343--352}
}
We present the first dense SLAM system capable of reconstructing non-rigidly deforming scenes in real-time, by fusing together RGBD scans captured from commodity sensors. Our DynamicFusion approach reconstructs scene geometry whilst simultaneously estimating a dense volumetric 6D motion field that warps the estimated geometry into a live frame. Like KinectFusion, our system produces increasingly denoised, detailed, and complete reconstructions as more measurements are fused, and displays the… 
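The "dense volumetric 6D motion field" in the abstract refers to a warp field: each deformation node carries a rigid (rotation + translation) transform, and a canonical point is warped by a distance-weighted blend of the transforms of nearby nodes. A minimal sketch of that idea (not the authors' code — DynamicFusion blends dual quaternions, whereas this illustration blends transformed points with a hypothetical Gaussian weight `sigma`):

```python
import numpy as np

def warp_point(p, node_pos, node_R, node_t, sigma=0.1):
    """Warp canonical point p by blending nearby nodes' SE(3) transforms.

    node_pos: (N, 3) node positions in the canonical frame
    node_R:   (N, 3, 3) per-node rotations
    node_t:   (N, 3) per-node translations
    """
    # Radial weights fall off with distance from each node.
    d2 = np.sum((node_pos - p) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum()
    # Weighted average of each node's transform applied to p — a simpler
    # stand-in for the dual-quaternion blending used in the paper.
    return np.einsum('n,nij,j->i', w, node_R, p) + w @ node_t

# One node at the origin translating by +0.1 in x: a point at the node
# moves with it.
p = np.zeros(3)
nodes = np.zeros((1, 3))
R = np.eye(3)[None]
t = np.array([[0.1, 0.0, 0.0]])
print(warp_point(p, nodes, R, t))  # → [0.1 0.  0. ]
```

With many nodes, points influenced by several transforms are blended smoothly, which is what lets the fused canonical model deform into each live frame.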

Citations

SplitFusion: Simultaneous Tracking and Mapping for Non-Rigid Scenes
TLDR
Experimental results show that the proposed approach can provide not only accurate environment maps but also well-reconstructed non-rigid targets, e.g., moving humans.
Fusion4D: real-time performance capture of challenging scenes
TLDR
This work contributes a new pipeline for live multi-view performance capture, generating temporally coherent high-quality reconstructions in real-time, highly robust to both large frame-to-frame motion and topology changes, allowing us to reconstruct extremely challenging scenes.
ArticulatedFusion: Real-time Reconstruction of Motion, Geometry and Segmentation Using a Single Depth Camera
This paper proposes a real-time dynamic scene reconstruction method capable of reproducing motion, geometry, and segmentation simultaneously, given a live depth stream from a single RGB-D camera.
MixedFusion: Real-Time Reconstruction of an Indoor Scene with Dynamic Objects
Hao Zhang, F. Xu · IEEE Transactions on Visualization and Computer Graphics · 2018
TLDR
This paper develops an end-to-end system that uses a depth sensor to scan a scene on the fly and proposes a Sigmoid-based Iterative Closest Point (S-ICP) method that fuses the geometry of both static and dynamic objects in real time, extending current techniques for indoor scene reconstruction.
RigidFusion: RGB‐D Scene Reconstruction with Rigidly‐moving Objects
TLDR
RigidFusion is a novel asynchronous moving-object detection method, combined with a modified volumetric fusion, that handles significantly more challenging reconstruction scenarios involving a moving camera and improves moving-object detection.
Towards Non-rigid Reconstruction - How to Adapt Rigid RGB-D Reconstruction to Non-rigid Movements?
TLDR
A novel algorithm is proposed to extend existing rigid RGB-D reconstruction pipelines to handle non-rigid transformations by storing, in addition to the model, the non-rigid transformation of the current frame as a sparse warp field in image space.
Deformable 3D Fusion: From Partial Dynamic 3D Observations to Complete 4D Models
TLDR
A template-less 4D reconstruction method that incrementally fuses highly-incomplete 3D observations of a deforming object, and generates a complete, temporally-coherent shape representation of the object.
DefSLAM: Tracking and Mapping of Deforming Scenes From Monocular Sequences
TLDR
This article presents DefSLAM, the first monocular SLAM capable of operating in deforming scenes in real time, which intertwines Shape-from-Template (SfT) and Non-Rigid Structure-from-Motion (NRSfM) techniques to deal with the exploratory sequences typical of SLAM.
PoseFusion2: Simultaneous Background Reconstruction and Human Shape Recovery in Real-time
TLDR
This work presents a fast, learning-based human object detector to isolate dynamic human subjects and realise a real-time dense background reconstruction framework, going further by estimating and reconstructing the human pose and shape.

References

Showing 1-10 of 39 references
KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera
TLDR
Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction, enabling real-time multi-touch interactions anywhere.
KinectFusion: Real-time dense surface mapping and tracking
We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware.
Real-time non-rigid reconstruction using an RGB-D camera
TLDR
A combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time, an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms.
Scanning and tracking dynamic objects with commodity depth cameras
TLDR
The quality of the model improves dramatically by fusing a sequence of noisy and incomplete depth scans of a human, and by deforming this fused model to later observations, noise- and hole-free 3D models are generated for a freely moving human.
Temporally enhanced 3D capture of room-sized dynamic scenes with commodity depth cameras
TLDR
This paper introduces a system to capture the enhanced 3D structure of a room-sized dynamic scene with commodity depth cameras such as Microsoft Kinects and incorporates temporal information to achieve a noise-free and complete 3D capture of the entire room.
Efficient reconstruction of nonrigid shape and motion from real-time 3D scanner data
TLDR
The reconstruction framework is based upon a novel topology-aware adaptive subspace deformation technique that allows handling long sequences with complex geometry efficiently, and accesses data in multiple sequential passes, so that long sequences can be streamed from hard disk without being limited by main memory.
Robust single-view geometry and motion reconstruction
TLDR
The method makes use of a smooth template that provides a crude approximation of the scanned object and serves as a geometric and topological prior for reconstruction that allows faithful recovery of small-scale shape and motion features leading to a high-quality reconstruction.
Animation cartography—intrinsic reconstruction of shape and motion
TLDR
This article considers the problem of animation reconstruction, that is, the reconstruction of shape and motion of a deformable object from dynamic 3D scanner data, without using user-provided template models, and proposes a number of algorithmic building blocks that can handle fast motion, temporally disrupted input, and correctly match objects that disappear for extended time periods in acquisition holes due to occlusion.
Space-time surface reconstruction using incompressible flow
TLDR
A volumetric space-time technique for the reconstruction of moving and deforming objects from point data, which constrains the optimization so that the distance material moves from one time frame to the next is bounded, the density of material remains constant, and the object remains compact.
Reconstruction of deforming geometry from time-varying point clouds
TLDR
A system for the reconstruction of deforming geometry from a time sequence of unstructured, noisy point clouds, as produced by recent real-time range scanning devices, is described, capable of robustly retrieving animated models with correspondences from data sets suffering from significant noise, outliers and acquisition holes.