SurfelMeshing: Online Surfel-Based Mesh Reconstruction

@article{Schps2020SurfelMeshingOS,
  title={SurfelMeshing: Online Surfel-Based Mesh Reconstruction},
  author={Thomas Sch{\"o}ps and Torsten Sattler and Marc Pollefeys},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020},
  volume={42},
  pages={2494-2507}
}
We address the problem of mesh reconstruction from live RGB-D video, assuming a calibrated camera and poses provided externally (e.g., by a SLAM system). […] This is possible by deforming the surfel cloud and asynchronously remeshing the surface where necessary. The surfel-based representation also naturally supports strongly varying scan resolution. In particular, it reconstructs colors at the input camera's resolution. Moreover, in contrast to many volumetric approaches, ours can reconstruct thin…
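
As a rough illustration of what a surfel-based pipeline like this maintains, the following Python/NumPy sketch shows a hypothetical surfel record with position, normal, radius, color and confidence, plus the bookkeeping that makes asynchronous remeshing possible: every added or deformed surfel is queued so that only the affected part of the surface needs to be re-triangulated. All names and the update rule are illustrative assumptions, not the paper's actual data layout.

import numpy as np

class SurfelCloud:
    def __init__(self):
        self.position = np.empty((0, 3))   # surfel centers
        self.normal = np.empty((0, 3))     # surfel normals
        self.radius = np.empty((0,))       # extent, adapts to scan resolution
        self.color = np.empty((0, 3))      # color at the input camera's resolution
        self.confidence = np.empty((0,))   # accumulated fusion weight
        self.dirty = set()                 # surfels whose neighborhood must be remeshed

    def add(self, p, n, r, c):
        i = len(self.position)
        self.position = np.vstack([self.position, p])
        self.normal = np.vstack([self.normal, n])
        self.radius = np.append(self.radius, r)
        self.color = np.vstack([self.color, c])
        self.confidence = np.append(self.confidence, 1.0)
        self.dirty.add(i)
        return i

    def deform(self, i, new_p, new_n, w):
        # Confidence-weighted update of surfel i; mark it for remeshing.
        c = self.confidence[i]
        self.position[i] = (c * self.position[i] + w * new_p) / (c + w)
        self.normal[i] = c * self.normal[i] + w * new_n
        self.normal[i] /= np.linalg.norm(self.normal[i])
        self.confidence[i] = c + w
        self.dirty.add(i)

    def pop_dirty(self):
        # Hand the changed region to the asynchronous remeshing thread.
        d, self.dirty = self.dirty, set()
        return d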

Gradient-SDF: A Semi-Implicit Surface Representation for 3D Reconstruction

TLDR
The proposed Gradient-SDF is a novel representation for 3D geometry that combines the advantages of implicit and explicit representations and is as well suited to (GPU) parallelization as related approaches.
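
To make the semi-implicit idea concrete, here is a small hypothetical voxel in Python that stores a signed distance together with its gradient, so a surface point and normal can be read off directly; the weighted running-average update is a generic choice, not the paper's exact formulation.

import numpy as np

class GradientSDFVoxel:
    def __init__(self):
        self.d = 0.0            # signed distance estimate
        self.g = np.zeros(3)    # gradient, roughly the surface normal direction
        self.w = 0.0            # accumulated integration weight

    def integrate(self, d_obs, g_obs, w_obs=1.0):
        # Generic weighted running average of distance and gradient.
        self.d = (self.w * self.d + w_obs * d_obs) / (self.w + w_obs)
        self.g = self.w * self.g + w_obs * g_obs
        n = np.linalg.norm(self.g)
        if n > 0.0:
            self.g /= n
        self.w += w_obs

    def nearest_surface_point(self, voxel_center):
        # Explicit surface access: step from the voxel center along the gradient.
        return voxel_center - self.d * self.g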

HRBF-Fusion: Accurate 3D Reconstruction from RGB-D Data Using On-the-fly Implicits

Reconstruction of high-fidelity 3D objects or scenes is a fundamental research problem. Recent advances in RGB-D fusion have demonstrated the potential of producing 3D models from consumer-level

Atlas: End-to-End 3D Scene Reconstruction from Posed Images

TLDR
An end-to-end 3D reconstruction method for a scene is presented that directly regresses a truncated signed distance function (TSDF) from a set of posed RGB images; semantic segmentation of the 3D model is obtained without significant additional computation.
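
The core operation behind this kind of TSDF regression is lifting 2D image features into a 3D voxel volume using the known intrinsics and poses; the NumPy sketch below shows that back-projection step with simple nearest-pixel sampling. Shapes and names are assumptions, and the learned 2D/3D networks that Atlas uses are omitted.

import numpy as np

def backproject_features(feat, K, T_cw, origin, voxel_size, dims):
    # feat: (H, W, C) image features; K: 3x3 intrinsics;
    # T_cw: 4x4 world-to-camera pose; dims: (X, Y, Z) voxel grid size.
    H, W, C = feat.shape
    xs, ys, zs = np.meshgrid(*[np.arange(d) for d in dims], indexing="ij")
    pts_w = origin + voxel_size * np.stack([xs, ys, zs], -1).reshape(-1, 3)
    pts_c = (T_cw[:3, :3] @ pts_w.T + T_cw[:3, 3:4]).T   # voxel centers in camera frame
    z = pts_c[:, 2]
    uv = (K @ pts_c.T).T
    u = uv[:, 0] / np.maximum(z, 1e-9)                    # clamp only to avoid /0;
    v = uv[:, 1] / np.maximum(z, 1e-9)                    # such points are masked below
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    vol = np.zeros((len(pts_w), C))
    vol[valid] = feat[v[valid].astype(int), u[valid].astype(int)]
    return vol.reshape(*dims, C), valid.reshape(dims)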

Scalable Point Cloud-based Reconstruction with Local Implicit Functions

TLDR
A hierarchical feature map in 3D space, extracted from the input point clouds, is used, from which local latent shape encodings can be queried at arbitrary positions, enabling accurate and detailed point cloud-based reconstructions for large numbers of points in a time-efficient manner.
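
Querying local latent shape encodings at arbitrary positions amounts to interpolating a feature grid at a continuous 3D coordinate; a minimal trilinear-interpolation sketch is given below. Grid layout and names are assumptions, and the learned decoder that turns a latent code into occupancy or signed distance is omitted.

import numpy as np

def query_feature_grid(grid, p, origin, cell):
    # grid: (X, Y, Z, C) latent codes on a regular grid; p: (3,) world-space query.
    X, Y, Z, _ = grid.shape
    q = (np.asarray(p, dtype=float) - origin) / cell   # continuous grid coordinates
    q = np.clip(q, 0.0, np.array([X, Y, Z]) - 1.001)   # keep the 2x2x2 cell in bounds
    i0 = np.floor(q).astype(int)
    t = q - i0
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - t[0]) if dx == 0 else t[0]) * \
                    ((1 - t[1]) if dy == 0 else t[1]) * \
                    ((1 - t[2]) if dz == 0 else t[2])
                out = out + w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return out                                          # interpolated latent code (C,)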

Directional TSDF: Modeling Surface Orientation for Coherent Meshes

TLDR
This work proposes the directional TSDF, a novel representation that stores opposing surfaces separately from each other. It outperforms state-of-the-art TSDF reconstruction algorithms in mesh accuracy and further improves accuracy by using surface gradient-based ray casting to fuse new measurements.
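
A minimal sketch of the directional idea: each voxel keeps one truncated signed distance per principal direction, and a measurement is integrated only into the directions its surface normal faces, so opposite sides of a thin structure no longer overwrite each other. Weighting by the cosine between the normal and the direction is an illustrative choice, not necessarily the paper's exact rule.

import numpy as np

DIRS = np.array([[ 1, 0, 0], [-1, 0, 0],
                 [ 0, 1, 0], [ 0, -1, 0],
                 [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

class DirectionalTSDFVoxel:
    def __init__(self):
        self.d = np.zeros(6)   # one truncated signed distance per direction
        self.w = np.zeros(6)   # one fusion weight per direction

    def integrate(self, d_obs, normal, w_obs=1.0):
        align = DIRS @ normal              # cosine between normal and each direction
        for k in range(6):
            if align[k] <= 0.0:            # surface does not face this direction
                continue
            wk = w_obs * align[k]
            self.d[k] = (self.w[k] * self.d[k] + wk * d_obs) / (self.w[k] + wk)
            self.w[k] += wk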

Gaussian Fusion: Accurate 3D Reconstruction via Geometry-Guided Displacement Interpolation

Reconstructing delicate geometric details with consumer RGB-D sensors is challenging due to sensor depth and pose uncertainties. To tackle this problem, we propose a unique geometry-guided fusion

Final Project Proposal Dense Mapping using Feature Matching and Superpixel Clustering

TLDR
The goal of this project is to reproduce the results of Wang et al., namely to implement superpixel extraction, surfel initialization, and surfel fusion, generating a surfel-based reconstruction given camera poses from a sparse SLAM implementation.
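
As an illustration of the surfel initialization step mentioned above, the sketch below summarizes one superpixel of a back-projected depth image as a single surfel: mean position, a plane-fit normal, and a radius covering the superpixel's extent. Superpixel extraction itself (e.g. SLIC) and the subsequent fusion stage are omitted, and the function is a hypothetical helper, not the project's actual interface.

import numpy as np

def init_surfel_from_superpixel(points):
    # points: (N, 3) back-projected depth pixels belonging to one superpixel.
    center = points.mean(axis=0)
    q = points - center
    # Normal = direction of least variance of the superpixel's points.
    _, _, vt = np.linalg.svd(q, full_matrices=False)
    normal = vt[-1]
    # Radius chosen so the surfel disk covers the superpixel's spatial extent.
    radius = np.linalg.norm(q, axis=1).max()
    return center, normal, radius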

Real-Time 3D Reconstruction of Colonoscopic Surfaces for Determining Missing Regions

TLDR
This work is the first to reconstruct a dense colon surface from video in real time and to display missing surface regions, using a novel deep-learning-driven dense SLAM system that produces a camera trajectory and a dense reconstructed surface for chunks of the colon.

Multi-sensor large-scale dataset for multi-view 3D reconstruction

We present a new multi-sensor dataset for 3D surface reconstruction. It includes registered RGB and depth data from sensors of different resolutions and modalities: smartphones, Intel RealSense,

BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion

TLDR
This work proposes a novel bi-level fusion strategy that considers both efficiency and reconstruction quality by design, and evaluates the proposed method on multiple datasets quantitatively and qualitatively, demonstrating significant improvement over existing methods.

References

SurfelWarp: Efficient Non-Volumetric Single View Dynamic Reconstruction

  • Wei Gao, Russ Tedrake
  • Computer Science
    Robotics: Science and Systems
  • 2018
TLDR
A dense SLAM system that takes a live stream of depth images as input and reconstructs non-rigid deforming scenes in real time, without templates or prior models, leading to significantly improved performance and allowing robots to maintain a scene description that potentially enables interactions with dynamic working environments.
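
A minimal sketch of the non-rigid warp used by embedded-deformation systems of this kind: each surfel is moved by a weighted blend of rigid transforms attached to nearby deformation-graph nodes. SurfelWarp blends transforms with dual quaternions; plain linear blending and the Gaussian weights below are simplifications for illustration.

import numpy as np

def warp_point(p, node_pos, node_R, node_t, sigma=0.05, k=4):
    # p: (3,) surfel position; node_pos: (M, 3) graph node positions;
    # node_R: (M, 3, 3) node rotations; node_t: (M, 3) node translations.
    d2 = np.sum((node_pos - p) ** 2, axis=1)
    nn = np.argsort(d2)[:k]                      # k nearest deformation nodes
    w = np.exp(-d2[nn] / (2.0 * sigma ** 2))
    w /= w.sum()
    warped = np.zeros(3)
    for wi, j in zip(w, nn):
        warped += wi * (node_R[j] @ (p - node_pos[j]) + node_pos[j] + node_t[j])
    return warped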

Volumetric 3D mapping in real-time on a CPU

TLDR
A novel volumetric multi-resolution mapping system for RGB-D images that runs on a standard CPU in real time and uses an octree as the primary data structure, which allows the system to represent the scene at multiple scales.
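
For illustration, a generic octree of the kind such a multi-resolution system builds: each node covers a cube, keeps a fused value at its own scale, and is subdivided lazily only where finer detail is observed. This is a plain Python sketch, not the paper's CPU-optimized implementation.

class OctreeNode:
    def __init__(self, center, size):
        self.center = center        # cube center (x, y, z)
        self.size = size            # edge length at this scale
        self.value = 0.0            # fused value (e.g. signed distance) at this scale
        self.weight = 0.0
        self.children = None        # None for a leaf, else a list of 8 child nodes

    def child_index(self, p):
        cx, cy, cz = self.center
        return int(p[0] > cx) | (int(p[1] > cy) << 1) | (int(p[2] > cz) << 2)

    def insert(self, p, value, min_size, weight=1.0):
        # Fuse at the current scale, then descend until the finest scale is reached.
        self.value = (self.weight * self.value + weight * value) / (self.weight + weight)
        self.weight += weight
        if self.size <= min_size:
            return
        if self.children is None:   # subdivide lazily
            h = self.size / 4.0
            self.children = [OctreeNode(
                (self.center[0] + (h if i & 1 else -h),
                 self.center[1] + (h if i & 2 else -h),
                 self.center[2] + (h if i & 4 else -h)),
                self.size / 2.0) for i in range(8)]
        self.children[self.child_index(p)].insert(p, value, min_size, weight)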

Monocular, Real-Time Surface Reconstruction Using Dynamic Level of Detail

TLDR
This work presents a scalable, real-time capable method for robust surface reconstruction that explicitly handles multiple scales, and relies on least-squares optimisation, which enables a probabilistically sound and principled formulation of the fusion algorithm.
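
The probabilistically sound least-squares fusion referred to here boils down to an inverse-variance-weighted update of repeated depth observations, which is also the maximum-a-posteriori estimate under Gaussian noise; a tiny sketch follows, with the per-observation noise model left as an assumption.

class DepthEstimate:
    def __init__(self):
        self.mean = 0.0
        self.info = 0.0             # accumulated information (1 / variance)

    def fuse(self, depth_obs, sigma):
        # Inverse-variance-weighted least-squares update of the depth estimate.
        info_obs = 1.0 / (sigma * sigma)
        self.mean = (self.info * self.mean + info_obs * depth_obs) / (self.info + info_obs)
        self.info += info_obs

    @property
    def variance(self):
        return 1.0 / self.info if self.info > 0.0 else float("inf")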

Efficient Online Surface Correction for Real-time Large-Scale 3D Reconstruction

TLDR
This work proposes an efficient on-the-fly surface correction method for globally consistent dense 3D reconstruction of large-scale scenes that requires only a single GPU and allows for real-time surface correction of large environments.

Real-Time Large-Scale Dense 3D Reconstruction with Loop Closure

TLDR
This paper proposes an online framework which delivers a consistent 3D model to the user in real time by splitting the scene into submaps, and adjusting the poses of the submaps as and when required.
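
The reason submaps make this cheap can be sketched in a few lines: geometry is stored in each submap's local frame, so a loop closure only has to update the small set of submap poses while the stored geometry stays untouched. Names and the 4x4 pose representation are illustrative assumptions.

import numpy as np

class Submap:
    def __init__(self, T_world_submap):
        self.T = T_world_submap     # 4x4 pose of this submap in the world frame
        self.points = []            # geometry kept in the submap's LOCAL frame

    def world_points(self):
        P = np.asarray(self.points)
        return (self.T[:3, :3] @ P.T).T + self.T[:3, 3]

def apply_loop_closure(submaps, corrected_poses):
    # After pose-graph optimization, only the submap poses are swapped in;
    # re-fusing the dense geometry is not required.
    for sm, T in zip(submaps, corrected_poses):
        sm.T = T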

Comprehensive Use of Curvature for Robust and Accurate Online Surface Reconstruction

TLDR
This work treats curvature as an independent quantity that is consistently incorporated into every stage of the real-time reconstruction pipeline, including densely curvature-weighted ICP, range image fusion, local surface reconstruction, and rendering.
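
As one concrete example of curvature entering the pipeline, the sketch below runs a single curvature-weighted point-to-plane ICP step: matched point pairs are down-weighted where their curvature estimates disagree. The weighting function is an illustrative assumption, not the paper's exact formulation.

import numpy as np

def curvature_weighted_icp_step(src, dst, dst_n, src_curv, dst_curv):
    # src, dst: (N, 3) matched points; dst_n: (N, 3) target normals;
    # src_curv, dst_curv: (N,) curvature estimates.
    w = 1.0 / (1.0 + (src_curv - dst_curv) ** 2)       # penalize curvature mismatch
    r = np.sum((src - dst) * dst_n, axis=1)            # point-to-plane residuals
    J = np.hstack([np.cross(src, dst_n), dst_n])       # (N, 6) linearized Jacobian
    A = J.T @ (w[:, None] * J)
    b = -J.T @ (w * r)
    return np.linalg.solve(A, b)                        # [rx, ry, rz, tx, ty, tz] update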

Field-aligned online surface reconstruction

TLDR
The method proposed in this paper is the first to combine the benefits of offline and online reconstruction for scenes with hundreds of millions of samples from high-resolution sensing modalities such as structured light or laser scanners, enabling a drastically more efficient, output-driven interactive scanning and reconstruction workflow.

Scalable real-time volumetric surface reconstruction

TLDR
This work designs a memory efficient, hierarchical data structure for commodity graphics hardware, which supports live reconstruction of large-scale scenes with fine geometric details, and experimentally demonstrates that a shallow hierarchy with relatively large branching factors yields the best memory/speed tradeoff.

Real-time 3D reconstruction at scale using voxel hashing

TLDR
An online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure.
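
The data structure behind this is a hash table over sparse voxel blocks: small dense blocks are allocated only near observed surfaces and addressed by hashing their integer block coordinate. The sketch below uses the prime-multiplier hash commonly used for spatial hashing; in this Python version the dictionary does the actual hashing, and block_hash only shows the explicit function a GPU table would use. Sizes and names are illustrative.

P1, P2, P3 = 73856093, 19349669, 83492791   # large primes for spatial hashing

def block_hash(bx, by, bz, table_size):
    return ((bx * P1) ^ (by * P2) ^ (bz * P3)) % table_size

class VoxelBlockGrid:
    def __init__(self, block_size=8, voxel_size=0.01):
        self.blocks = {}                    # block coordinate -> dense block of voxels
        self.block_size = block_size
        self.voxel_size = voxel_size

    def block_of(self, p):
        # Integer block coordinate containing world point p.
        s = self.voxel_size * self.block_size
        return tuple(int(c // s) for c in p)

    def get_or_allocate(self, p):
        key = self.block_of(p)
        if key not in self.blocks:          # allocate lazily, only near surfaces
            self.blocks[key] = [[0.0, 0.0] for _ in range(self.block_size ** 3)]  # (tsdf, weight)
        return self.blocks[key]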

BundleFusion: real-time globally consistent 3D reconstruction using on-the-fly surface re-integration

TLDR
This work systematically addresses these issues with a novel, real-time, end-to-end reconstruction framework, which outperforms state-of-the-art online systems with quality on par with offline methods, but with unprecedented speed and scan completeness.
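
The on-the-fly re-integration this relies on works because TSDF fusion is a weighted running average: a frame's contribution can later be subtracted (de-integrated) using its old pose and added back with its refined pose. A minimal per-voxel sketch, with names and the (tsdf, weight) layout as assumptions:

def integrate(voxel, d_obs, w_obs):
    d, w = voxel
    return ((w * d + w_obs * d_obs) / (w + w_obs), w + w_obs)

def deintegrate(voxel, d_obs, w_obs):
    # Exactly undoes a previous integrate() call with the same observation.
    d, w = voxel
    w_new = w - w_obs
    if w_new <= 0.0:
        return (0.0, 0.0)                   # nothing left from other frames
    return ((w * d - w_obs * d_obs) / w_new, w_new)

# When frame k's pose is refined:
#   voxel = deintegrate(voxel, d_old, w_k)  # remove the contribution made with the old pose
#   voxel = integrate(voxel, d_new, w_k)    # re-integrate with the refined pose
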
...