3D Scene Reconstruction by Integration of Photometric and Geometric Methods


In this thesis, we have developed a framework for image-based 3D reconstruction of sparse point clouds and dense depth maps. The framework is based on the self-consistent integration of geometric and photometric constraints on the surface shape, such as triangulation, defocus, and reflectance. The reconstruction of point clouds starts by tracking object features over a range of distances from the camera with a small depth of field, leading to a varying degree of defocus for each feature. Information on absolute depth is obtained with a Depth from Defocus approach. The parameters of the point spread functions estimated by Depth from Defocus serve as a regularisation term for Structure from Motion: the reprojection error obtained from bundle adjustment and the absolute depth error obtained from Depth from Defocus are minimised simultaneously for all tracked object features. The proposed method yields absolutely scaled 3D coordinates of the scene points without any prior knowledge of the scene structure or the camera motion.

Another part of the framework is the estimation of dense depth maps based on intensity and polarisation reflectance together with absolute depth data from arbitrary sources, e.g. the Structure from Motion and Defocus method. The proposed technique performs the analysis on any combination of single or multiple intensity and polarisation images. To compute the surface gradients, we present a global optimisation method based on a variational framework and a local optimisation method that solves a set of nonlinear equations individually for each image pixel. These approaches are suitable both for strongly non-Lambertian surfaces and for surfaces of diffuse reflectance behaviour, and can also be adapted to surfaces of non-uniform albedo. We describe how independently measured absolute depth data are integrated into the Shape from Photopolarimetric Reflectance (SfPR) framework in order to increase the accuracy of the 3D reconstruction result.
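The combined Structure from Motion and Depth from Defocus minimisation described above can be illustrated with a minimal sketch. This is not the thesis implementation: the two-view pinhole setup, the translation-only camera pose, and the weighting `lam` are simplifying assumptions made here purely for illustration. The key idea it demonstrates is that reprojection residuals alone leave the overall scale free, while the Depth-from-Defocus depth term pins it down.

```python
# Illustrative sketch (not the thesis implementation): jointly minimising
# bundle-adjustment reprojection errors and a Depth-from-Defocus (DfD)
# absolute-depth term, so the reconstruction acquires a metric scale.
import numpy as np
from scipy.optimize import least_squares

def project(points3d, t):
    """Pinhole projection (unit focal length) after translating by camera offset t."""
    p = points3d + t
    return p[:, :2] / p[:, 2:3]

def residuals(params, obs_a, obs_b, z_dfd, sigma_dfd, n_pts, lam=1.0):
    t_b = params[:3]                     # second camera's translation (first is at the origin)
    pts = params[3:].reshape(n_pts, 3)
    r_a = (project(pts, np.zeros(3)) - obs_a).ravel()   # view A reprojection error
    r_b = (project(pts, t_b) - obs_b).ravel()           # view B reprojection error
    # DfD term: PSF-derived absolute depths regularise the metric scale
    r_z = lam * (pts[:, 2] - z_dfd) / sigma_dfd
    return np.concatenate([r_a, r_b, r_z])

# Synthetic experiment: scene points 4-6 units deep, a 0.5-unit baseline,
# and an initial guess at half the true scale (reprojection alone cannot
# detect this scale error; the DfD depth term corrects it).
rng = np.random.default_rng(0)
n = 12
pts_true = np.column_stack([rng.uniform(-1, 1, (n, 2)), rng.uniform(4, 6, n)])
t_true = np.array([0.5, 0.0, 0.0])
obs_a = project(pts_true, np.zeros(3))
obs_b = project(pts_true, t_true)
z_dfd = pts_true[:, 2]                   # noise-free DfD depths in this sketch
sigma_dfd = 0.05 * np.ones(n)
x0 = np.concatenate([t_true * 0.5, (pts_true * 0.5).ravel()])
sol = least_squares(residuals, x0, args=(obs_a, obs_b, z_dfd, sigma_dfd, n))
pts_rec = sol.x[3:].reshape(n, 3)
```

In a real setting the DfD depths would be noisy and `sigma_dfd` would come from the uncertainty of the estimated point spread function parameters, so the depth term acts as a soft prior rather than a hard constraint.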
We evaluate the proposed framework on both synthetic and real-world data. The Structure from Motion and Defocus algorithm typically yields relative errors of the absolute scale of less than 3 percent. In our real-world experiments with SfPR, we consider two scenarios: the 3D reconstruction of raw forged iron surfaces in the domain of industrial quality inspection, and the generation of a digital elevation model of a section of the lunar surface. The obtained depth accuracy is better than the lateral pixel resolution.
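The local optimisation mentioned earlier, which solves a small set of nonlinear equations per pixel for the surface gradients, might be sketched roughly as follows. The Lambertian intensity model, the azimuth-based polarisation-angle model, and the light direction `LIGHT` below are illustrative stand-ins chosen for this sketch, not the reflectance functions of the thesis.

```python
# Hedged sketch of the per-pixel idea behind the local SfPR optimisation:
# at each pixel, solve a nonlinear system F(p, q) = 0 that compares measured
# intensity and polarisation values against a reflectance model, yielding
# the surface gradients (p, q). Models here are illustrative only.
import numpy as np
from scipy.optimize import fsolve

LIGHT = np.array([0.3, 0.2, 0.9])
LIGHT = LIGHT / np.linalg.norm(LIGHT)    # assumed illumination direction

def normal(p, q):
    """Unit surface normal for gradients (p, q)."""
    n = np.array([-p, -q, 1.0])
    return n / np.linalg.norm(n)

def model_intensity(p, q, albedo=1.0):
    """Stand-in Lambertian intensity model."""
    return albedo * max(normal(p, q) @ LIGHT, 0.0)

def solve_pixel(i_meas, phi_meas, pq0=(0.1, 0.1)):
    """Recover (p, q) at one pixel from measured intensity and polarisation angle."""
    def eqs(pq):
        p, q = pq
        return [
            model_intensity(p, q) - i_meas,
            # azimuth constraint in smooth form: the normal's azimuth must
            # match the measured polarisation angle (modulo the 180-degree
            # ambiguity inherent in polarisation measurements)
            p * np.sin(phi_meas) - q * np.cos(phi_meas),
        ]
    return fsolve(eqs, pq0)

# Round-trip check: synthesise measurements from known gradients, then recover them.
p_true, q_true = 0.3, -0.2
i_meas = model_intensity(p_true, q_true)
phi_meas = np.arctan2(-q_true, -p_true)
pq_rec = solve_pixel(i_meas, phi_meas, pq0=(0.25, -0.15))
```

Because each pixel is solved independently, the method needs no global smoothness assumption, but it inherits the local ambiguities of the reflectance and polarisation models; a reasonable initialisation per pixel, as in the sketch, keeps the solver on the intended branch.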

Cite this paper

@inproceedings{dAngelo20073DSR,
  title  = {3D Scene Reconstruction by Integration of Photometric and Geometric Methods},
  author = {Pablo d’Angelo and Jingping Liu and Gerhard Sagerer},
  year   = {2007}
}