KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera

@article{Izadi2011KinectFusionR3,
  title={KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera},
  author={Shahram Izadi and David Kim and Otmar Hilliges and David Molyneaux and Richard A. Newcombe and Pushmeet Kohli and Jamie Shotton and Steve Hodges and Dustin Freeman and Andrew J. Davison and Andrew W. Fitzgibbon},
  journal={Proceedings of the 24th annual ACM symposium on User interface software and technology},
  year={2011}
}
  • Published 16 October 2011
  • Computer Science
  • Proceedings of the 24th annual ACM symposium on User interface software and technology
KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. [...] Key Method: Uses of the core system for low-cost handheld scanning, and for geometry-aware augmented reality and physics-based interactions, are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time…
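At its core, KinectFusion accumulates incoming depth frames into a volumetric truncated signed distance function (TSDF) on the GPU and tracks the camera against the fused surface with ICP. As a rough illustration of the volumetric fusion idea only, here is a minimal CPU-side sketch in NumPy; the function name `integrate_frame`, the truncation distance `mu`, and the voxel layout are illustrative assumptions, not the paper's implementation or parameters.

```python
import numpy as np

def integrate_frame(tsdf, weights, depth, K, cam_pose, vol_origin, voxel_size, mu=0.03):
    """Fuse one depth frame into a TSDF volume via a weighted running average.

    tsdf, weights : (X, Y, Z) float arrays holding the current volume state
    depth         : (H, W) depth image in metres
    K             : 3x3 pinhole intrinsics
    cam_pose      : 4x4 camera-to-world transform for this frame
    vol_origin    : (3,) world position of voxel (0, 0, 0)
    mu            : truncation distance in metres (illustrative value)
    """
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel centre.
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts_w = np.asarray(vol_origin) + voxel_size * np.stack([ii, jj, kk], -1).reshape(-1, 3)

    # Transform voxels into the camera frame and project with the pinhole model.
    world_to_cam = np.linalg.inv(cam_pose)
    pts_c = (world_to_cam[:3, :3] @ pts_w.T + world_to_cam[:3, 3:4]).T
    z = pts_c[:, 2]
    z_safe = np.where(z > 0, z, 1.0)                      # avoid division by zero
    u = np.round(K[0, 0] * pts_c[:, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[:, 1] / z_safe + K[1, 2]).astype(int)

    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth[np.clip(v, 0, H - 1), np.clip(u, 0, W - 1)], 0.0)
    valid &= d > 0

    # Signed distance along the viewing ray, truncated to [-1, 1] in units of mu;
    # voxels far behind the observed surface are skipped.
    sdf = np.clip((d - z) / mu, -1.0, 1.0)
    upd = (valid & (sdf > -1.0)).reshape(tsdf.shape)
    sdf = sdf.reshape(tsdf.shape)

    w_new = weights[upd] + 1.0
    tsdf[upd] = (tsdf[upd] * weights[upd] + sdf[upd]) / w_new
    weights[upd] = w_new
    return tsdf, weights
```

Ray casting (or a Marching Cubes pass) over the zero crossing of the fused TSDF then yields the surface used for rendering and for tracking the next frame.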
Real-time 3D Reconstruction Using a Combination of Point-Based and Volumetric Fusion
TLDR
Proposes a weighted iterative closest point (ICP) algorithm that uses both depth and RGB information to improve the stability of camera tracking and the segmentation of moving objects at reduced computational complexity.
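The weighted ICP summarized above down-weights unreliable correspondences when estimating camera motion. As a rough sketch of a single weighted point-to-plane ICP update (standard small-angle linearization; the helper name and the source of the weights are hypothetical, not taken from the cited paper):

```python
import numpy as np

def weighted_point_to_plane_step(src, dst, normals, weights):
    """One linearized point-to-plane ICP update with per-correspondence weights.

    src, dst : (N, 3) matched source and destination points
    normals  : (N, 3) destination surface normals
    weights  : (N,)  confidence per correspondence (e.g. lower for noisy depth
               or for pixels suspected to belong to moving objects)
    Returns a 4x4 incremental rigid transform.
    """
    # Each correspondence contributes one row of the 6-DoF system A x = b,
    # with x = [rx, ry, rz, tx, ty, tz] under the small-angle approximation.
    A = np.hstack([np.cross(src, normals), normals])      # (N, 6)
    b = np.einsum("ij,ij->i", normals, dst - src)         # (N,)

    w = np.sqrt(weights)                                  # weight each residual row
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)

    rx, ry, rz, tx, ty, tz = x
    # Small-angle rotation; in practice it is re-orthonormalized and the step is
    # repeated over several iterations with re-matching.
    T = np.eye(4)
    T[:3, :3] = np.array([[1.0, -rz,  ry],
                          [ rz, 1.0, -rx],
                          [-ry,  rx, 1.0]])
    T[:3, 3] = [tx, ty, tz]
    return T
```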
Kinect-Based Easy 3D Object Reconstruction
TLDR
The basic idea is to use an existing, powerful 2D segmentation tool to refine the silhouette in each color image and then form the visual hull from the refined dense silhouettes to improve the 3D object model.
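Forming a visual hull from refined silhouettes amounts to keeping only those voxels that project inside the object mask in every calibrated view. A minimal carving sketch under that assumption (binary silhouette masks and 3x4 projection matrices are taken as given; the names are hypothetical, not from the cited paper):

```python
import numpy as np

def carve_visual_hull(voxels_world, silhouettes, projections):
    """Keep only voxels whose projection falls inside every silhouette.

    voxels_world : (N, 3) candidate voxel centres in world coordinates
    silhouettes  : list of (H, W) binary masks, one per calibrated view
    projections  : list of 3x4 camera projection matrices K [R | t]
    Returns a boolean mask over the N voxels (the carved visual hull).
    """
    N = voxels_world.shape[0]
    pts_h = np.hstack([voxels_world, np.ones((N, 1))])    # homogeneous coordinates
    inside = np.ones(N, dtype=bool)

    for mask, P in zip(silhouettes, projections):
        H, W = mask.shape
        proj = (P @ pts_h.T).T                            # (N, 3) projective image coords
        z = proj[:, 2]
        z_safe = np.where(z != 0, z, 1.0)
        u = np.round(proj[:, 0] / z_safe).astype(int)
        v = np.round(proj[:, 1] / z_safe).astype(int)

        in_image = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        hit = np.zeros(N, dtype=bool)
        hit[in_image] = mask[v[in_image], u[in_image]] > 0
        inside &= hit                                     # any view that misses carves the voxel
    return inside
```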
Real-time 360 Body Scanning System for Virtual Reality Research Applications
TLDR
The system is composed of a cluster of 10 Microsoft Kinect 2 cameras, each paired with a compact NUC PC that streams live depth and color images to a master PC, which reconstructs the point cloud of the scene live and can, in particular, show the bodies of users standing in the capture area.
MonoFusion: Real-time 3D reconstruction of small scenes with a single web camera
TLDR
Qualitative results demonstrate high-quality reconstructions that are visually comparable to those of active depth sensor-based systems such as KinectFusion, making such systems even more accessible.
Real-time 3D scene reconstruction with dynamically moving object using a single depth camera
TLDR
Experimental results show that the proposed real-time approach, using a single depth camera, can reconstruct a moving object as well as the static environment with rich detail, and outperforms conventional methods in multiple aspects.
3D reconstruction based on Kinect
This paper realizes a 3D reconstruction system with Kinect, which can rebuild high-level, geometrically accurate 3D models with texture features in real time. Now the data formats, such as .obj, .stl, …
KinectFusion: Real-time dense surface mapping and tracking
We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware.
3D Scene Reconstruction from Depth Camera Data
TLDR
This chapter discusses several approaches targeted at depth cameras, including the KinectFusion approach and its extension to dynamic scenes, as well as solutions for pre-processing, pairwise and global registration, and fusion of views.
MobileFusion: Real-Time Volumetric Surface Reconstruction and Dense Tracking on Mobile Phones
TLDR
This work presents the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones, and compares it qualitatively to a state-of-the-art point-based mobile phone method.
Rapid creation of photorealistic virtual reality content with consumer depth cameras
TLDR
This work demonstrates a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera.

References

SHOWING 1-10 OF 37 REFERENCES
KinectFusion: Real-time dense surface mapping and tracking
We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware.
Live dense reconstruction with a single moving camera
TLDR
This work takes point-based real-time structure from motion (SfM) as a starting point, generating accurate 3D camera pose estimates and a sparse point cloud, and warps a base mesh into highly accurate depth maps based on view-predictive optical flow and a constrained scene-flow update.
Real-time 3D model acquisition
TLDR
A new 3D model acquisition system that permits the user to rotate an object by hand and see a continuously updated model as the object is scanned, demonstrating that the prototype can scan objects faster and with greater ease than conventional model acquisition pipelines.
RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments
TLDR
This paper presents RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment to achieve globally consistent maps.
DTAM: Dense tracking and mapping in real-time
TLDR
It is demonstrated that a dense model permits superior tracking performance under rapid motion compared to a state-of-the-art feature-based method, and the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application is shown.
Dynamic shape capture using multi-view photometric stereo
We describe a system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views. The normal maps are captured using active shape-from-shading…
3D shape scanning with a time-of-flight camera
TLDR
It is shown, surprisingly, that 3D scans of reasonable quality can be obtained even with a sensor of such low data quality, using a new combination of a 3D superresolution method with a probabilistic scan alignment approach that explicitly takes the sensor's noise characteristics into account.
In-hand scanning with online loop closure
TLDR
A complete 3D in-hand scanning system that allows users to scan objects by simply turning them freely in front of a real-time 3D range scanner; the online model is of sufficiently high quality to serve as the final model.
Parallel Tracking and Mapping for Small AR Workspaces
TLDR
A system specifically designed to track a hand-held camera in a small AR workspace, processed in parallel threads on a dual-core computer, that produces detailed maps with thousands of landmarks which can be tracked at frame rate with accuracy and robustness rivalling those of state-of-the-art model-based systems.
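The design choice summarized here is to decouple per-frame camera tracking from slower map optimization by running them in parallel threads. The sketch below illustrates that pattern only; `track_frame` and `optimize_map` are hypothetical callbacks standing in for the real tracker and bundle adjuster, not PTAM's actual code.

```python
import queue
import threading

def parallel_tracking_and_mapping(frames, track_frame, optimize_map):
    """Run tracking at frame rate while mapping refines keyframes in the background.

    track_frame(frame, shared_map) -> (pose, is_keyframe)   # hypothetical callback
    optimize_map(shared_map)                                 # e.g. bundle adjustment
    """
    shared_map = {"keyframes": [], "lock": threading.Lock()}
    keyframe_queue = queue.Queue()
    done = threading.Event()
    poses = []

    def mapping_loop():
        # Consume keyframes and refine the map off the time-critical tracking path.
        while not (done.is_set() and keyframe_queue.empty()):
            try:
                kf = keyframe_queue.get(timeout=0.1)
            except queue.Empty:
                continue
            with shared_map["lock"]:
                shared_map["keyframes"].append(kf)
            optimize_map(shared_map)

    mapper = threading.Thread(target=mapping_loop, daemon=True)
    mapper.start()

    for frame in frames:                       # the tracking loop never blocks on mapping
        pose, is_keyframe = track_frame(frame, shared_map)
        poses.append(pose)
        if is_keyframe:
            keyframe_queue.put((frame, pose))

    done.set()
    mapper.join()
    return poses
```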
Real-Time Visibility-Based Fusion of Depth Maps
We present a viewpoint-based approach for the quick fusion of multiple stereo depth maps. Our method selects depth estimates for each pixel that minimize violations of visibility constraints and thus…