We propose a novel method for the multi-view reconstruction problem. Surfaces that lack direct support in the input 3D point cloud, and hence need not be photo-consistent, but that represent real parts of the scene (e.g. low-textured walls, windows, cars) are important for achieving complete reconstructions. We augmented the existing Labatut CGF 2009 …
We analyze Kinect as a 3D measuring device, experimentally investigate its depth measurement resolution and error properties, and make a quantitative comparison of Kinect accuracy with stereo reconstruction from SLR cameras and with a 3D-TOF camera. We propose a geometrical model of Kinect and a calibration procedure providing an accurate calibration of Kinect 3D …
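As a rough illustration of why Kinect depth resolution degrades with distance, the sketch below evaluates the commonly used inverse-disparity depth model for Kinect v1. The coefficients are illustrative placeholders in the spirit of publicly quoted calibrations, not the calibration obtained in the paper.

```python
# A minimal sketch (not the paper's calibration) of the inverse-disparity
# depth model commonly used for Kinect v1: z = 1 / (c1 * d + c0), where d
# is the raw 11-bit disparity. Coefficients below are illustrative only.
import numpy as np

def depth_from_disparity(d, c0=3.33, c1=-0.0030):
    """Convert raw Kinect disparity to metric depth (metres)."""
    return 1.0 / (c1 * d + c0)

if __name__ == "__main__":
    d = np.arange(400, 1000)        # a plausible raw disparity range
    z = depth_from_disparity(d)
    # The depth quantization step grows roughly quadratically with distance
    # (dz/dd is proportional to z^2), which is why Kinect resolution
    # deteriorates at long range.
    step = np.abs(np.diff(z))
    print(f"depth {z[0]:.2f} m -> step {step[0] * 1000:.1f} mm")
    print(f"depth {z[-2]:.2f} m -> step {step[-1] * 1000:.1f} mm")
```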
This paper presents a scalable multi-view stereo reconstruction method which can deal with a large number of large unorganized images in affordable time and effort. The computational effort of our technique is a linear function of the surface area of the observed scene, which is conveniently discretized to represent sufficient but not excessive detail. Our …
We present a wearable audio-visual capturing system, termed AWEAR 2.0, along with its underlying vision components that allow robust self-localization, multi-body pedestrian tracking, and dense scene reconstruction. Designed as a backpack, the system is aimed at supporting the cognitive abilities of the wearer. In this paper, we focus on the design issues …
We present a multi-view stereo method that avoids producing hallucinated surfaces which do not correspond to real surfaces. Our approach to 3D reconstruction is based on the minimal s-t cut of the graph derived from the Delaunay tetrahedralization of a dense 3D point cloud, which produces watertight meshes. This is often a desirable property but it …
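The pipeline behind this family of methods can be sketched in a few lines: tetrahedralize the point cloud with Delaunay, build a graph with one node per tetrahedron, and label tetrahedra as inside or outside by a minimal s-t cut; the facets between the two labels form a watertight mesh. The edge capacities below are placeholders, not the paper's energy, which is derived from visibility of the input points.

```python
# A minimal sketch (assumed placeholder weights) of the Delaunay + minimal
# s-t cut construction: one graph node per tetrahedron, terminal edges for
# the unary terms, facet edges for the pairwise terms.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

points = np.random.rand(200, 3)      # stand-in for a dense 3D point cloud
dt = Delaunay(points)                 # 3D Delaunay tetrahedralization

G = nx.DiGraph()
for t, nbrs in enumerate(dt.neighbors):
    for nb in nbrs:
        if nb == -1:
            # Facet on the convex hull: tie this tetrahedron to the
            # "outside" terminal with a placeholder capacity.
            G.add_edge("source", t, capacity=1.0)
        else:
            # Inter-tetrahedron facet; in the real method the capacity
            # penalizes the surface crossing this facet.
            G.add_edge(t, nb, capacity=0.5)
    # Placeholder unary term pulling every tetrahedron towards "inside".
    G.add_edge(t, "sink", capacity=0.1)

cut_value, (outside, inside) = nx.minimum_cut(G, "source", "sink")
# The oriented facets between "outside" and "inside" tetrahedra would form
# the watertight output mesh.
print(len(outside) - 1, "outside /", len(inside) - 1, "inside tetrahedra")
```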
We present a novel method for 3D surface reconstruction from an input cloud of 3D points augmented with visibility information. We observe that it is possible to reconstruct surfaces that do not contain input points. Instead of modeling the surface from input points, we model free space from visibility information of the input points. The complement of the …
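A minimal sketch of the free-space idea, under the assumption of a simple sampling-based carving rather than the paper's exact construction: every tetrahedron crossed by a camera-to-point line of sight is marked as free space, and the surface is then taken from the boundary of the complement of free space. The camera position below is hypothetical.

```python
# Sampling-based free-space carving on a Delaunay tetrahedralization
# (a crude stand-in; a robust method would walk facets along each ray).
import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(300, 3)               # input 3D points
cameras = np.array([[0.5, 0.5, 2.0]])          # hypothetical camera centre(s)
dt = Delaunay(points)

free = np.zeros(len(dt.simplices), dtype=bool)
for cam in cameras:
    for p in points:
        # Sample the segment camera -> point, stopping just short of the
        # point itself (which lies on the surface), and mark every
        # tetrahedron the samples fall into as free space.
        samples = cam + np.linspace(0.0, 0.98, 50)[:, None] * (p - cam)
        hit = dt.find_simplex(samples)
        free[hit[hit >= 0]] = True

# "Occupied" tetrahedra are the complement of free space; the output
# surface consists of the facets between free and occupied tetrahedra.
print(free.sum(), "free /", (~free).sum(), "occupied tetrahedra")
```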
CMP SfM Web Service is a remote procedure call service operated at the Center of Machine Perception (CMP) of the Czech Technical University in Prague. The majority of available procedures are implementations of Computer Vision methods developed at CMP. The service can be accessed through a web page and command-line scripting interfaces. This paper presents …
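To illustrate what a command-line scripting client for such a remote procedure call service could look like, the sketch below posts a JSON request over HTTP. The endpoint URL, procedure name, and payload fields are entirely hypothetical and do not reflect the actual CMP SfM Web Service interface.

```python
# A hypothetical command-line client for a remote procedure call service.
# The URL, procedure name, and payload below are illustrative only; they
# are NOT the real CMP SfM Web Service API.
import json
import sys
import urllib.request

SERVICE_URL = "http://example.org/rpc"   # placeholder endpoint

def call_procedure(name, params):
    """Send one JSON-encoded remote procedure call and return the reply."""
    body = json.dumps({"procedure": name, "params": params}).encode("utf-8")
    req = urllib.request.Request(
        SERVICE_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # e.g. python client.py my_images.zip
    result = call_procedure("reconstruct", {"archive": sys.argv[1]})
    print(result)
```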