3-D Reconstruction from Sparse Views using Monocular Vision

@inproceedings{Saxena20073DRF,
  title={3-D Reconstruction from Sparse Views using Monocular Vision},
  author={Ashutosh Saxena and Min Sun and Andrew Y. Ng},
  booktitle={2007 IEEE 11th International Conference on Computer Vision},
  year={2007},
  pages={1-8}
}
We consider the task of creating a 3-d model of a large novel environment, given only a small number of images of the scene. This is a difficult problem, because if the images are taken from very different viewpoints or if they contain similar-looking structures, then most geometric reconstruction methods will have great difficulty finding good correspondences. Further, the reconstructions given by most algorithms include only points in 3-d that were observed in two or more images; a point… 
Learning 3-D Scene Structure from a Single Still Image
TLDR
This work considers the problem of estimating detailed 3D structure from a single still image of an unstructured environment and uses a Markov random field (MRF) to infer a set of "plane parameters" that capture both the 3D location and 3D orientation of the patch.
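To make the "plane parameter" idea concrete, here is a small illustrative sketch (not the authors' code) of how a single 3-vector alpha can encode both the 3D location and orientation of a planar patch, assuming the common convention that a pixel with unit viewing ray r lying on the plane has depth d with 1/d = r · alpha. The intrinsics matrix and pixel coordinates below are made up for the example.

```python
import numpy as np

def pixel_rays(points_2d, K):
    """Unit viewing rays for pixel coordinates (N, 2), given intrinsics K (assumed values)."""
    ones = np.ones((points_2d.shape[0], 1))
    rays = np.linalg.inv(K) @ np.hstack([points_2d, ones]).T   # shape (3, N)
    return (rays / np.linalg.norm(rays, axis=0)).T             # shape (N, 3)

def depths_from_plane(alpha, rays):
    """Depth along each unit ray for plane parameters alpha (3,), using 1/d = r . alpha."""
    return 1.0 / (rays @ alpha)

# Toy usage: a hypothetical camera and a fronto-parallel plane 5 m away.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pixels = np.array([[320.0, 240.0], [400.0, 300.0]])
alpha = np.array([0.0, 0.0, 1.0]) / 5.0     # plane normal along z, 5 m from the camera
print(depths_from_plane(alpha, pixel_rays(pixels, K)))   # ~[5.0, 5.1]
```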
Make3D: Depth Perception from a Single Still Image
TLDR
This paper presents algorithms for estimating depth from a single still image, and discusses applications of the depth perception algorithm in robotic navigation, in improving the performance of stereovision, and in creating large-scale 3-d models given only a small number of images.
Construction of 3D models from single view images: A survey based on various approaches
  • S. Mohan, L. Mani
  • Computer Science
    2011 International Conference on Emerging Trends in Electrical and Computer Technology
  • 2011
TLDR
This survey discusses a few well-known approaches to creating an approximate 3D model from a single still image and examines the importance of global image features.
Combining recognition and geometry for data-driven 3D reconstruction
TLDR
The concept of the Shape Anchor is introduced, a region for which the combination of recognition and multiple view geometry allows us to accurately predict the latent, dense point cloud.
Scene Reconstruction from Multiple Images with Single Centre of Projection
TLDR
The focus here is on the problem of refining a visual reconstruction that yields camera pose, calibration, and a three-dimensional structure estimate when multiple views share a single centre of projection.
3-D Reconstruction of Image Structures Based-on Double Optical Paths Microscopy
Abstract—This paper presents a method for reconstructing 3-D image structures from data collected with double-optical-path microscopy. A gathered color image is divided into two monochromatic images…
Efficient and robust algorithms for sparse 3-D reconstruction from image sequences
TLDR
This thesis focuses on algorithms for sparse 3-D reconstruction, which represent the scene structure with a limited number of feature points, and proposes several enhancements to increase their efficiency and robustness, such as an efficient hierarchical feature selection strategy, a new result propagation strategy for hierarchical translation estimation, and the efficient integration of an affine linear model for intensity equalization.
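A "hierarchical feature selection strategy" suggests keeping features that are both strong and spatially spread out; the following is a hedged sketch of one simple bucketed variant (the cell size, scores, and per-cell quota are illustrative assumptions, not the thesis's actual algorithm).

```python
import numpy as np

def select_features_per_cell(points, scores, image_width, cell=64, per_cell=2):
    """Keep the `per_cell` highest-scoring points in each cell x cell image bucket.

    points: (N, 2) pixel coordinates; scores: (N,) detector responses.
    """
    cols = int(np.ceil(image_width / cell))
    cell_id = (points[:, 1] // cell).astype(int) * cols + (points[:, 0] // cell).astype(int)
    keep = []
    for c in np.unique(cell_id):
        idx = np.flatnonzero(cell_id == c)
        keep.extend(idx[np.argsort(scores[idx])[::-1][:per_cell]])
    return np.sort(np.array(keep))

# Toy usage: three clustered corners compete in one cell, the fourth sits alone.
pts = np.array([[10.0, 10.0], [20.0, 12.0], [30.0, 15.0], [200.0, 200.0]])
sc = np.array([0.9, 0.5, 0.7, 0.8])
print(select_features_per_cell(pts, sc, image_width=640))   # -> [0 2 3]
```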
Unsupervised reconstruction of a Visual Hull in space, time and light domains
TLDR
An unsupervised image segmentation approach for obtaining a set of silhouettes along with the Visual Hull of an object observed from multiple viewpoints that allows for robust Visual Hull reconstruction of a variety of challenging objects such as objects made of shiny metal or glass.
Multi-view Superpixel Stereo in Man-made Environments
TLDR
This work formulates the 3D reconstruction problem in an MRF framework built on an image pre-segmented into superpixels and proposes novel robust cost measures that overcome many difficulties of standard pixel-based formulations and gracefully handle problematic scenarios containing many repetitive structures and untextured or weakly textured regions.
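As a rough illustration of what a robust, superpixel-level cost can look like, consider the generic truncated photometric cost below; this is a sketch of the general idea only, not necessarily the measures proposed in the paper.

```python
import numpy as np

def robust_superpixel_cost(ref_intensities, warped_intensities, tau=0.2):
    """Mean truncated absolute intensity difference over one superpixel.

    The two arrays sample the same superpixel in the reference view and as
    re-projected from another view under a candidate plane hypothesis;
    tau caps each residual so occlusions and mismatches cannot dominate.
    """
    residual = np.abs(ref_intensities - warped_intensities)
    return float(np.mean(np.minimum(residual, tau)))

# Toy usage: a good hypothesis gives small residuals; a bad one has its outliers capped.
ref = np.array([0.30, 0.32, 0.31, 0.29])
good = ref + 0.01
bad = np.array([0.30, 0.90, 0.05, 0.29])
print(robust_superpixel_cost(ref, good), robust_superpixel_cost(ref, bad))
```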
Multi-view Superpixel Stereo in Urban Environments
TLDR
This work proposes novel photometric and superpixel boundary consistency terms derived explicitly from superpixels and shows that they overcome many difficulties of standard pixel-based formulations and gracefully handle problematic scenarios containing many repetitive structures and untextured or weakly textured regions.

References

Learning 3-D Scene Structure from a Single Still Image
TLDR
This work considers the problem of estimating detailed 3D structure from a single still image of an unstructured environment and uses a Markov random field (MRF) to infer a set of "plane parameters" that capture both the 3D location and 3D orientation of the patch.
Automatic Single-Image 3d Reconstructions of Indoor Manhattan World Scenes
TLDR
This paper uses a Markov random field model to identify the different planes and edges in the scene, as well as their orientations, and applies an iterative optimization algorithm to infer the most probable position of all the planes, thereby obtaining a 3D reconstruction.
3-D Depth Reconstruction from a Single Still Image
TLDR
This work proposes a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.
A Dynamic Bayesian Network Model for Autonomous 3D Reconstruction from a Single Indoor Image
  • E. Delage, Honglak Lee, A. Ng
  • Computer Science
    2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)
  • 2006
TLDR
This paper presents a dynamic Bayesian network model capable of resolving some of the ambiguities of monocular vision and recovering 3D information for many images, and shows that the model can be used for 3D reconstruction from a single image.
Learning Depth from Single Monocular Images
TLDR
This work begins by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps, and applies supervised learning to predict the depthmap as a function of the image.
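For intuition, here is a minimal sketch of the supervised-learning setup described above, using synthetic per-patch features and log-depth targets; the feature dimension, ridge regularizer, and random data are assumptions, and the paper's multi-scale hand-designed features and MRF smoothing are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, n_features = 500, 34                    # feature dimension is an assumption
X = rng.normal(size=(n_patches, n_features))       # per-patch feature vectors (synthetic)
w_true = rng.normal(size=n_features)
log_depth = X @ w_true + 0.1 * rng.normal(size=n_patches)   # synthetic log-depth targets

# Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ log_depth)
pred = X @ w
print("RMSE in log-depth:", np.sqrt(np.mean((pred - log_depth) ** 2)))
```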
Geometric context from a single image
TLDR
This work shows that it can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes, and provides a multiple-hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label.
Depth Estimation Using Monocular and Stereo Cues
TLDR
This paper shows that, by adding monocular cues to stereo (triangulation) cues, significantly more accurate depth estimates are obtained than is possible using either monocular or stereo cues alone.
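One simple way to see why combining the two cue types helps is an inverse-variance weighted fusion of a triangulated stereo estimate with a monocular estimate; the sketch below illustrates that general idea only, not the paper's actual probabilistic model, and the variance values are assumptions.

```python
import numpy as np

def fuse_depths(d_stereo, var_stereo, d_mono, var_mono):
    """Inverse-variance weighted average of two depth estimates (metres)."""
    w_s, w_m = 1.0 / var_stereo, 1.0 / var_mono
    return (w_s * d_stereo + w_m * d_mono) / (w_s + w_m)

# Toy usage: stereo dominates at 2 m, the monocular cue matters more at 20 m.
d_stereo = np.array([2.0, 20.0])
var_stereo = np.array([0.01, 25.0])      # triangulation error grows quickly with depth
d_mono = np.array([2.4, 18.0])
var_mono = (0.2 * d_mono) ** 2           # assumed ~20% relative error for the monocular cue
print(fuse_depths(d_stereo, var_stereo, d_mono, var_mono))
```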
Single View Metrology
TLDR
An algebraic representation is developed which unifies the three types of measurement and permits a first order error propagation analysis to be performed, associating an uncertainty with each measurement.
Visual Modeling with a Hand-Held Camera
TLDR
A complete system for building visual models from camera images is presented, together with a combined approach using view-dependent geometry and texture; as an application, the fusion of real and virtual scenes is also shown.
Depth Estimation from Image Structure
TLDR
It is demonstrated that, by recognizing the properties of the structures present in the image, one can infer the scale of the scene and, therefore, its absolute mean depth.