Superpixel Soup: Monocular Dense 3D Reconstruction of a Complex Dynamic Scene

@article{Kumar2021SuperpixelSM,
  title={Superpixel Soup: Monocular Dense 3D Reconstruction of a Complex Dynamic Scene},
  author={Suryansh Kumar and Yuchao Dai and Hongdong Li},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  volume={43},
  pages={1705--1717}
}
This work addresses the task of dense 3D reconstruction of a complex dynamic scene from images. Prevailing approaches decompose the task into a sequence of steps, each dependent on the success of a separate pipeline. To overcome these limitations, we propose a unified approach. We assume that a dynamic scene can be approximated by numerous piecewise planar surfaces, where each planar surface enjoys its own rigid motion…
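The piecewise-planar assumption above, each surface patch approximated by a plane with its own rigid motion, can be illustrated with a least-squares plane fit per segment. A minimal NumPy sketch (the points and segment labels below are synthetic, not from the paper's pipeline):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns a unit normal n and offset d with n.x = d."""
    centroid = points.mean(axis=0)
    # The singular vector with the smallest singular value of the
    # centered point matrix is the best-fit plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, normal @ centroid

# Synthetic scene: two planar segments with distinct orientations.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
seg_a = np.column_stack([xy, 0.5 * xy[:, 0] + 0.1])   # plane z = 0.5x + 0.1
seg_b = np.column_stack([xy, -0.3 * xy[:, 1] + 2.0])  # plane z = -0.3y + 2

for label, pts in enumerate([seg_a, seg_b]):
    n, d = fit_plane(pts)
    residual = np.abs(pts @ n - d).max()
    print(f"segment {label}: normal={np.round(n, 3)}, max residual={residual:.2e}")
```

For truly planar segments the residual sits at numerical-noise level; in the paper's setting each such plane would additionally carry its own rigid motion across frames.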
A novel no-sensors 3D model reconstruction from monocular video frames for a dynamic environment
Proposes a new framework that builds a full 3D model reconstruction, overcoming the occlusion problem in a complex dynamic scene without using sensor data, and compares it against widely used state-of-the-art evaluation methods.
Novel View Synthesis from only a 6-DoF Camera Pose by Two-stage Networks
Synthesizes a novel view directly from only a 6-DoF camera pose, proposing a two-stage learning strategy of two consecutive CNNs, GenNet and RefineNet, which decouple the geometric mapping from texture-detail rendering.
Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo
Exploits the image formation model of a multi-view photometric stereo (MVPS) setup to recover a dense 3D reconstruction of an object from images, performing neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
Editable free-viewpoint video using a layered neural representation
Proposes the first approach to editable free-viewpoint video generation for large-scale view-dependent dynamic scenes using only 16 cameras, via a new layered neural representation, ST-NeRF, which disentangles the location, deformation, and appearance of each dynamic entity in a continuous and self-supervised manner.
A Closed-Form Solution to Local Non-Rigid Structure-from-Motion
Shows that, under widely applicable assumptions, a new system of equations can be derived in terms of the surface normals, whose two solutions are obtained in closed form and easily disambiguated locally.
Neural Architecture Search for Efficient Uncalibrated Deep Photometric Stereo
Uses a differentiable neural architecture search (NAS) strategy to find an uncalibrated photometric stereo architecture automatically, defining discrete search spaces for a light calibration network and a normal estimation network, respectively.
Non-Rigid Structure from Motion: Prior-Free Factorization Method Revisited
  • Suryansh Kumar
  • Computer Science
  • 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
  • 2020
Explores some hidden intricacies missed by the work of Dai et al. and argues that, by properly utilizing well-established assumptions about a non-rigidly deforming shape, the simple prior-free idea can provide results comparable to the best available algorithms.
Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces
Presents an uncalibrated deep neural network framework for photometric stereo that explicitly models the concave and convex parts of a complex surface to account for interreflections in the image formation process.
A Unified Optimization Framework for Low-Rank Inducing Penalties
Unifies two important classes of regularizers, unbiased non-convex formulations and weighted nuclear norm penalties, and shows that the proposed regularizers can be incorporated into standard splitting schemes such as the Alternating Direction Method of Multipliers (ADMM) and other sub-gradient methods.
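The ADMM splittings mentioned above hinge on the proximal operator of the nuclear norm, which is singular value thresholding. A minimal NumPy sketch (the matrix and the threshold tau are illustrative choices, not taken from the paper):

```python
import numpy as np

def svt(matrix, tau):
    """Prox of tau * ||X||_* : shrink each singular value toward zero by tau."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt

rng = np.random.default_rng(1)
# A rank-2 matrix plus small dense noise.
low_rank = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
noisy = low_rank + 0.01 * rng.standard_normal((30, 20))
denoised = svt(noisy, tau=1.0)
print("rank after thresholding:", np.linalg.matrix_rank(denoised, tol=1e-6))
```

Shrinking the singular values both suppresses the noise and zeroes out the small ones entirely, which is why nuclear-norm-style penalties induce low rank.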

References

Showing 1–10 of 65 references.
Monocular Dense 3D Reconstruction of a Complex Dynamic Scene from Two Perspective Frames
This paper proposes a new approach for monocular dense 3D reconstruction of a complex dynamic scene from two perspective frames. By applying superpixel over-segmentation to the image, we model a…
Video Pop-up: Monocular 3D Reconstruction of Dynamic Scenes
An unsupervised approach to the challenging problem of simultaneously segmenting the scene into its constituent objects and reconstructing a 3D model of the scene; the motion segmentation component is evaluated on the Berkeley Motion Segmentation Dataset.
3D Scene Flow Estimation with a Piecewise Rigid Scene Model
Proposes to represent the dynamic scene as a collection of rigidly moving planes into which the input images are segmented, and shows that a view-consistent multi-frame scheme significantly improves accuracy, especially in the presence of occlusions, and increases robustness to adverse imaging conditions.
Dense Monocular Depth Estimation in Complex Dynamic Scenes
Provides a novel motion segmentation algorithm that segments the optical flow field into a set of motion models, each with its own epipolar geometry, and shows that the scene can be reconstructed from these motion models by optimizing a convex program.
Template-free monocular reconstruction of deformable surfaces
Uses a local deformation model to fit a triangulated mesh to the 3D point cloud, making the reconstruction robust to both noise and outliers in the image data.
Piecewise Quadratic Reconstruction of Non-Rigid Surfaces from Monocular Sequences
Presents a new method for 3D reconstruction of highly deforming surfaces viewed by a single orthographic camera: it divides the surface into overlapping patches, reconstructs each patch individually using a quadratic deformation model, and finally registers them under the constraint that points shared by patches must correspond to the same 3D points in space.
3D scene flow estimation with a rigid motion prior
Derives a local rigidity constraint on the 3D scene flow and defines a smoothness term that penalizes deviations from it, favoring solutions that consist largely of rigidly moving parts.
Direct, Dense, and Deformable: Template-Based Non-rigid 3D Reconstruction from RGB Video
First computes a dense 3D template of the object's shape from a short rigid sequence, then performs online reconstruction of the non-rigid mesh as it evolves over time by minimizing a robust photometric cost.
Grouping-Based Low-Rank Trajectory Completion and 3D Reconstruction
Proposes a method that combines dense optical flow tracking, motion trajectory clustering, and NRSfM for 3D reconstruction of objects in videos; it is the first to extract dense object models from realistic videos, such as those found on YouTube or in Hollywood movies, without object-specific priors.
Dense Variational Reconstruction of Non-rigid Surfaces from Monocular Video
Offers the first variational approach to dense 3D reconstruction of non-rigid surfaces from a monocular video sequence, reconstructing highly deforming smooth surfaces densely and accurately directly from video, without the need for any prior models or shape templates.