DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time
@article{Newcombe2015DynamicFusionRA,
  title={DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time},
  author={Richard A. Newcombe and Dieter Fox and Steven M. Seitz},
  journal={2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2015},
  pages={343-352}
}
We present the first dense SLAM system capable of reconstructing non-rigidly deforming scenes in real-time, by fusing together RGBD scans captured from commodity sensors. Our DynamicFusion approach reconstructs scene geometry whilst simultaneously estimating a dense volumetric 6D motion field that warps the estimated geometry into a live frame. Like KinectFusion, our system produces increasingly denoised, detailed, and complete reconstructions as more measurements are fused, and displays the…
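To make the abstract's dense motion field concrete: DynamicFusion represents the warp as a sparse set of deformation nodes, each carrying a 6D rigid transform, and blends nearby node transforms to move every canonical-frame point into the live frame. The sketch below is a minimal illustration of that idea under simplifying assumptions: the node layout, Gaussian support weights, and plain linear blending are placeholders, whereas the paper itself blends node transforms with dual quaternions to stay on SE(3).

```python
import numpy as np

def node_weight(x, node_pos, sigma):
    """Gaussian support weight of a deformation node at canonical point x."""
    return np.exp(-np.sum((x - node_pos) ** 2) / (2.0 * sigma ** 2))

def warp_point(x, nodes):
    """Warp a canonical-frame point x into the live frame.

    `nodes` is a list of (position, R, t, sigma) tuples: each deformation
    node stores a 3x3 rotation R and a translation t (together a 6-DoF
    rigid transform) plus a support radius sigma. The per-node transforms
    are blended linearly here for brevity; DynamicFusion uses dual
    quaternion blending instead.
    """
    w_sum, blended = 0.0, np.zeros(3)
    for pos, R, t, sigma in nodes:
        w = node_weight(x, pos, sigma)
        blended += w * (R @ (x - pos) + pos + t)  # rotate about the node, then translate
        w_sum += w
    return blended / max(w_sum, 1e-8)

# Tiny usage example: two nodes, the second one translated slightly upward.
nodes = [
    (np.array([0.0, 0.0, 0.0]), np.eye(3), np.zeros(3), 0.05),
    (np.array([0.1, 0.0, 0.0]), np.eye(3), np.array([0.0, 0.02, 0.0]), 0.05),
]
print(warp_point(np.array([0.05, 0.0, 0.0]), nodes))
```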
708 Citations
SplitFusion: Simultaneous Tracking and Mapping for Non-Rigid Scenes
- Computer Science · 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- 2020
Experimental results show that the proposed approach can provide not only accurate environment maps but also well-reconstructed non-rigid targets such as moving humans.
Fusion4D: real-time performance capture of challenging scenes
- Computer Science · ACM Trans. Graph.
- 2016
This work contributes a new pipeline for live multi-view performance capture that generates temporally coherent, high-quality reconstructions in real time and is highly robust to both large frame-to-frame motion and topology changes, allowing extremely challenging scenes to be reconstructed.
ArticulatedFusion: Real-time Reconstruction of Motion, Geometry and Segmentation Using a Single Depth Camera
- Computer Science · ECCV
- 2018
This paper proposes a real-time dynamic scene reconstruction method capable of reproducing motion, geometry, and segmentation simultaneously, given a live depth stream from a single RGB-D camera…
SkeletonFusion: Reconstruction and tracking of human body in real-time
- Computer Science · Optics and Lasers in Engineering
- 2018
MixedFusion: Real-Time Reconstruction of an Indoor Scene with Dynamic Objects
- Computer Science · IEEE Transactions on Visualization and Computer Graphics
- 2018
This paper develops an end-to-end system that uses a depth sensor to scan a scene on the fly and proposes a Sigmoid-based Iterative Closest Point (S-ICP) method that fuses the geometry of both static and dynamic objects in a scene in real time, extending the usage of current techniques for indoor scene reconstruction.
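The entry above names a Sigmoid-based ICP; the sketch below is only a hypothetical illustration of how a sigmoid weight could separate static from dynamic geometry during pose estimation, not MixedFusion's actual formulation. The parameter names and values (tau, k) are assumptions: correspondences with large residuals, which are more likely to lie on moving objects, are smoothly downweighted before a weighted rigid fit estimates the camera pose from the remaining, mostly static points.

```python
import numpy as np

def sigmoid_weights(residuals, tau=0.02, k=200.0):
    """Smoothly downweight correspondences with large point-to-point
    residuals (metres). tau is the residual at which the weight drops to
    0.5 and k controls the sharpness; both values are illustrative."""
    return 1.0 / (1.0 + np.exp(k * (residuals - tau)))

def weighted_rigid_fit(src, dst, w):
    """One weighted Procrustes/Kabsch step: the rigid transform (R, t)
    that best aligns src to dst under per-point weights w."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# One ICP iteration given matched points (correspondence search omitted):
# residuals = np.linalg.norm(src_matched - dst_matched, axis=1)
# R, t = weighted_rigid_fit(src_matched, dst_matched, sigmoid_weights(residuals))
```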
RigidFusion: RGB‐D Scene Reconstruction with Rigidly‐moving Objects
- Computer Science · Comput. Graph. Forum
- 2021
RigidFusion is a novel asynchronous moving-object detection method, combined with a modified volumetric fusion, that handles significantly more challenging reconstruction scenarios involving a moving camera and improves moving-object detection.
Towards Non-rigid Reconstruction - How to Adapt Rigid RGB-D Reconstruction to Non-rigid Movements?
- Computer Science · VISIGRAPP
- 2017
A novel algorithm is proposed that extends existing rigid RGB-D reconstruction pipelines to handle non-rigid transformations by storing, in addition to the model, the non-rigid transformation nrt of the current frame as a sparse warp field in image space.
Deformable 3D Fusion: From Partial Dynamic 3D Observations to Complete 4D Models
- Computer Science · 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015
A template-less 4D reconstruction method that incrementally fuses highly-incomplete 3D observations of a deforming object, and generates a complete, temporally-coherent shape representation of the object.
DefSLAM: Tracking and Mapping of Deforming Scenes From Monocular Sequences
- Materials Science · IEEE Transactions on Robotics
- 2021
This article presents DefSLAM, the first monocular SLAM capable of operating in deforming scenes in real time, which intertwines Shape-from-Template (SfT) and Non-Rigid Structure-from-Motion (NRSfM) techniques to deal with the exploratory sequences typical of SLAM.
PoseFusion2: Simultaneous Background Reconstruction and Human Shape Recovery in Real-time
- Computer Science · 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- 2021
This work presents a fast, learning-based human detector to isolate dynamic human objects and realise a real-time dense background reconstruction framework, and goes further by estimating and reconstructing human pose and shape.
References
SHOWING 1-10 OF 39 REFERENCES
KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera
- Physics · UIST
- 2011
Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction, to enable real-time multi-touch interactions anywhere.
KinectFusion: Real-time dense surface mapping and tracking
- Computer Science · 2011 10th IEEE International Symposium on Mixed and Augmented Reality
- 2011
We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware.…
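The fusion the DynamicFusion abstract refers to ("Like KinectFusion, our system produces increasingly denoised, detailed, and complete reconstructions as more measurements are fused") is, at its core, a per-voxel running weighted average of truncated signed distance values. A minimal sketch of that update follows; the array layout, the fixed per-frame weight, and the weight cap are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_sdf, new_weight=1.0, max_weight=100.0):
    """Fold one frame's truncated signed-distance observations into the
    volume as a running weighted average (the KinectFusion-style update).

    tsdf, weight : per-voxel arrays holding the fused TSDF and its weight.
    new_sdf      : this frame's truncated signed distance per voxel
                   (NaN where the voxel was not observed).
    """
    seen = ~np.isnan(new_sdf)                      # only update observed voxels
    w_old, w_new = weight[seen], new_weight
    tsdf[seen] = (w_old * tsdf[seen] + w_new * new_sdf[seen]) / (w_old + w_new)
    weight[seen] = np.minimum(w_old + w_new, max_weight)  # cap so the model stays adaptive
    return tsdf, weight

# Usage: start with an empty 64^3 volume and fuse one synthetic observation.
tsdf = np.zeros((64, 64, 64)); weight = np.zeros_like(tsdf)
obs = np.full_like(tsdf, np.nan); obs[32, 32, 32] = 0.4
tsdf, weight = fuse_tsdf(tsdf, weight, obs)
```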
Real-time non-rigid reconstruction using an RGB-D camera
- Computer Science · ACM Trans. Graph.
- 2014
A combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time, an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms.
Scanning and tracking dynamic objects with commodity depth cameras
- Computer Science · 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)
- 2013
The quality of the model improves dramatically as a sequence of noisy and incomplete depth data of a human is fused, and by deforming this fused model to later observations, noise- and hole-free 3D models are generated for the freely moving human.
Temporally enhanced 3D capture of room-sized dynamic scenes with commodity depth cameras
- Computer Science · 2014 IEEE Virtual Reality (VR)
- 2014
This paper introduces a system to capture the enhanced 3D structure of a room-sized dynamic scene with commodity depth cameras such as Microsoft Kinects and incorporates temporal information to achieve a noise-free and complete 3D capture of the entire room.
Efficient reconstruction of nonrigid shape and motion from real-time 3D scanner data
- Computer Science · ACM Trans. Graph.
- 2009
The reconstruction framework is based on a novel topology-aware adaptive subspace deformation technique that handles long sequences with complex geometry efficiently, and it accesses data in multiple sequential passes, so that long sequences can be streamed from hard disk rather than being limited by main memory.
Robust single-view geometry and motion reconstruction
- Computer Science · ACM Trans. Graph.
- 2009
The method makes use of a smooth template that provides a crude approximation of the scanned object and serves as a geometric and topological prior for reconstruction; this prior allows faithful recovery of small-scale shape and motion features, leading to a high-quality reconstruction.
Animation cartography—intrinsic reconstruction of shape and motion
- Computer Science · ACM Trans. Graph.
- 2012
This article considers the problem of animation reconstruction, that is, the reconstruction of shape and motion of a deformable object from dynamic 3D scanner data without user-provided template models, and proposes a number of algorithmic building blocks that can handle fast motion and temporally disrupted input, and that correctly match objects which disappear into acquisition holes for extended time periods due to occlusion.
Space-time surface reconstruction using incompressible flow
- Computer Science · SIGGRAPH 2008
- 2008
A volumetric space-time technique for the reconstruction of moving and deforming objects from point data, which uses an optimization ensuring that the distance material moves from one time frame to the next is bounded, that the density of material remains constant, and that the object remains compact.
Reconstruction of deforming geometry from time-varying point clouds
- Computer Science · Symposium on Geometry Processing
- 2007
A system is described for the reconstruction of deforming geometry from a time sequence of unstructured, noisy point clouds, as produced by recent real-time range scanning devices; it is capable of robustly retrieving animated models with correspondences from data sets suffering from significant noise, outliers, and acquisition holes.