KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera

@inproceedings{Izadi2011KinectFusionR3,
  title={KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera},
  author={Shahram Izadi and David Kim and Otmar Hilliges and David Molyneaux and Richard A. Newcombe and Pushmeet Kohli and Jamie Shotton and Steve Hodges and Dustin Freeman and Andrew J. Davison and Andrew W. Fitzgibbon},
  booktitle={Proceedings of the 24th annual ACM symposium on User interface software and technology},
  year={2011}
}
  • Published 16 October 2011
KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Uses of the core system for low-cost handheld scanning, and for geometry-aware augmented reality and physics-based interactions, are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time…
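At its core, KinectFusion fuses each incoming depth frame into a truncated signed distance function (TSDF) volume using a running weighted average (Curless–Levoy-style volumetric fusion). The sketch below illustrates that update step in NumPy; all names, parameters, and the CPU-side interface are illustrative, not the paper's GPU implementation:

```python
import numpy as np

def tsdf_update(tsdf, weights, depth, K, cam_pose, voxel_origin, voxel_size, trunc=0.05):
    """Fuse one depth frame into a TSDF volume via a running weighted average.

    tsdf, weights: (X, Y, Z) arrays; depth: (H, W) in metres;
    K: 3x3 intrinsics; cam_pose: 4x4 camera-to-world transform.
    Illustrative sketch only -- the real system does this per-voxel on the GPU.
    """
    H, W = depth.shape
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel centre
    ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij")
    pts = voxel_origin + voxel_size * np.stack([ix, iy, iz], axis=-1)  # (X,Y,Z,3)
    # Transform voxel centres into the camera frame
    world_to_cam = np.linalg.inv(cam_pose)
    pc = pts @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pc[..., 2]
    # Project into the depth image
    u = np.round(K[0, 0] * pc[..., 0] / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pc[..., 1] / z + K[1, 2]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    # Signed distance along the viewing ray, truncated to [-1, 1]
    sdf = d - z
    upd = valid & (d > 0) & (sdf > -trunc)
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    # Running weighted average, as in standard volumetric fusion
    w = weights[upd]
    tsdf[upd] = (tsdf[upd] * w + tsdf_new[upd]) / (w + 1)
    weights[upd] = w + 1
    return tsdf, weights
```

Voxels in front of the observed surface converge toward +1, voxels on the surface toward 0, and voxels more than `trunc` behind it are left untouched; the zero-crossing of the fused field is the reconstructed surface.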

Real-time 3D Reconstruction Using a Combination of Point-Based and Volumetric Fusion

A weighted iterative closest point (ICP) algorithm is proposed that uses both depth and RGB information to improve the stability of camera tracking and the segmentation of moving objects, while reducing computational complexity.
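A weighted point-to-plane ICP step reduces to a linearised Gauss–Newton solve over the six rigid-motion parameters. The sketch below assumes correspondences are already established; the per-correspondence weighting interface is illustrative (the paper derives its weights from depth and RGB consistency):

```python
import numpy as np

def weighted_icp_step(src, dst, dst_normals, weights):
    """One Gauss-Newton step of weighted point-to-plane ICP.

    src, dst: (N, 3) corresponding points; dst_normals: (N, 3) unit normals;
    weights: (N,) per-correspondence weights (weighting scheme illustrative).
    Returns a 4x4 rigid transform under the small-angle linearisation.
    """
    # Residual r_i = ((I + [a]x) p_i + t - q_i) . n_i, linear in x = [a, t]
    A = np.hstack([np.cross(src, dst_normals), dst_normals])    # (N, 6)
    b = np.einsum("ij,ij->i", dst - src, dst_normals)           # (N,)
    sw = np.sqrt(weights)
    x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    rx, ry, rz, tx, ty, tz = x
    # Small-angle rotation approximation I + [a]x
    T = np.eye(4)
    T[:3, :3] = np.array([[1, -rz, ry], [rz, 1, -rx], [-ry, rx, 1]])
    T[:3, 3] = [tx, ty, tz]
    return T
```

In a full tracker this step is iterated, re-establishing correspondences (e.g. by projective data association) and down-weighting outliers at each iteration.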

Kinect-Based Easy 3D Object Reconstruction

The basic idea is to make use of the existing powerful 2D segmentation tool to refine the silhouette in each color image and then form visual hull via the refined dense silhouettes to improve the 3D object model.
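The visual-hull construction described above can be sketched as voxel carving: a voxel survives only if it projects inside the (refined) silhouette in every view. All names and the interface below are illustrative, not the paper's API:

```python
import numpy as np

def carve_visual_hull(silhouettes, Ks, poses, grid_pts):
    """Voxel-based visual hull from calibrated silhouettes.

    silhouettes: list of (H, W) boolean masks; Ks: list of 3x3 intrinsics;
    poses: list of 4x4 world-to-camera transforms; grid_pts: (N, 3) voxel
    centres in world coordinates. Returns an (N,) boolean occupancy mask.
    """
    inside = np.ones(len(grid_pts), dtype=bool)
    for mask, K, T in zip(silhouettes, Ks, poses):
        H, W = mask.shape
        # Project voxel centres into this view
        pc = grid_pts @ T[:3, :3].T + T[:3, 3]
        z = pc[:, 2]
        u = np.round(K[0, 0] * pc[:, 0] / z + K[0, 2]).astype(int)
        v = np.round(K[1, 1] * pc[:, 1] / z + K[1, 2]).astype(int)
        ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[ok] = mask[v[ok], u[ok]]
        # A voxel is carved away as soon as any view rejects it
        inside &= hit
    return inside
```

Refining each silhouette with a 2D segmentation tool before carving, as the paper proposes, directly tightens the resulting hull.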

Real-time 360 Body Scanning System for Virtual Reality Research Applications

The system is composed of a cluster of 10 Microsoft Kinect 2 cameras, each paired with a compact NUC PC that streams live depth and color images to a master PC, which reconstructs the point cloud of the scene live and can, in particular, show the bodies of users standing in the capture area.

MonoFusion: Real-time 3D reconstruction of small scenes with a single web camera

Qualitative results demonstrate high-quality reconstructions that are visually comparable even to those of active depth-sensor-based systems such as KinectFusion, making such systems even more accessible.

Real-time 3D scene reconstruction with dynamically moving object using a single depth camera

Experimental results show that the proposed single-depth-camera real-time approach can reconstruct a moving object as well as the static environment with rich detail, and outperforms conventional methods in multiple aspects.

KinectFusion: Real-time dense surface mapping and tracking

We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware.

3D reconstruction based on Kinect

  • Lu Li, Z. Miao, M. Liang
  • Computer Science
    2014 12th International Conference on Signal Processing (ICSP)
  • 2014
This paper realizes a 3D reconstruction system with Kinect, which can rebuild high-level, geometrically accurate 3D models in real time with texture features. Now the data formats, such as .obj, .stl,

3D Scene Reconstruction from Depth Camera Data

This chapter discusses several approaches targeted to depth cameras, including the KinectFusion approach and its extension to dynamic scenes, and solutions for pre-processing, pairwise, and global registration as well as fusion of views.

Rapid creation of photorealistic virtual reality content with consumer depth cameras

This work demonstrates a complete end-to-end pipeline for the capture, processing, and rendering of view-dependent 3D models in virtual reality from a single consumer-grade RGB-D camera.

Optimized KinectFusion Algorithm for 3D Scanning Applications

This paper presents a method to optimize KinectFusion for 3D scanning in the above scenarios and aims to reduce the noise influence on camera pose tracking.
...

References

SHOWING 1-10 OF 36 REFERENCES

KinectFusion: Real-time dense surface mapping and tracking

We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, using only a moving low-cost depth camera and commodity graphics hardware.

Live dense reconstruction with a single moving camera

This work takes point-based real-time structure from motion (SFM) as a starting point, generating accurate 3D camera pose estimates and a sparse point cloud and warp the base mesh into highly accurate depth maps based on view-predictive optical flow and a constrained scene flow update.

Real-time 3D model acquisition

A new 3D model acquisition system that permits the user to rotate an object by hand and see a continuously-updated model as the object is scanned, demonstrating the ability of the prototype to scan objects faster and with greater ease than conventional model acquisition pipelines.

RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments

This paper presents RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment to achieve globally consistent maps.

DTAM: Dense tracking and mapping in real-time

It is demonstrated that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application is shown.

Dynamic shape capture using multi-view photometric stereo

A system for high-resolution capture of moving 3D geometry, beginning with dynamic normal maps from multiple views, which represents performances over human-size working volumes at a temporal resolution of 60 Hz.

3D shape scanning with a time-of-flight camera

The surprising result is shown that 3D scans of reasonable quality can be obtained even with a sensor of such low data quality, using a new combination of a 3D super-resolution method with a probabilistic scan alignment approach that explicitly takes the sensor's noise characteristics into account.

In-hand scanning with online loop closure

A complete 3D in-hand scanning system that allows users to scan objects by simply turning them freely in front of a real-time 3D range scanner and the online model is of sufficiently high quality to serve as the final model.

Parallel Tracking and Mapping for Small AR Workspaces

A system specifically designed to track a hand-held camera in a small AR workspace, processed in parallel threads on a dual-core computer, that produces detailed maps with thousands of landmarks which can be tracked at frame-rate with accuracy and robustness rivalling that of state-of-the-art model-based systems.

Real-Time Visibility-Based Fusion of Depth Maps

We present a viewpoint-based approach for the quick fusion of multiple stereo depth maps. Our method selects depth estimates for each pixel that minimize violations of visibility constraints and thus…