Etienne Grossmann

We present a method to reconstruct, from one or more images, a scene that is rich in planes, alignments, symmetries, orthogonalities, and other forms of geometrical regularity. Given image points of interest and some geometric information, the method recovers least-squares estimates of the 3D points, camera position(s), orientation(s), and eventually …
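The truncated abstract does not spell out the estimation step, but the kind of constrained least-squares fit it describes can be pictured as a joint optimisation over camera pose and 3D points with a soft geometric penalty. The sketch below is not taken from the paper; the helper names, the single-camera setup, and the penalty weight w_plane are illustrative assumptions, and in practice one residual block per constraint type (alignments, orthogonalities, symmetries) would be added in the same way.

    # Minimal sketch (not the paper's algorithm): joint least-squares refinement of
    # one camera pose and the 3D points under a soft coplanarity constraint.
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project(K, rvec, t, X):
        # Pinhole projection of Nx3 points X with rotation vector rvec, translation t.
        Xc = Rotation.from_rotvec(rvec).apply(X) + t
        x = (K @ Xc.T).T
        return x[:, :2] / x[:, 2:3]

    def residuals(params, K, obs, plane_idx, n_pts, w_plane=10.0):
        rvec, t = params[:3], params[3:6]
        X = params[6:].reshape(n_pts, 3)
        r_proj = (project(K, rvec, t, X) - obs).ravel()   # reprojection error
        P = X[plane_idx] - X[plane_idx].mean(axis=0)
        normal = np.linalg.svd(P)[2][-1]                  # best-fit plane normal
        r_plane = w_plane * (P @ normal)                  # distances to that plane
        return np.concatenate([r_proj, r_plane])

    def refine(K, obs, X0, plane_idx):
        # obs: Nx2 image points, X0: Nx3 initial guess, plane_idx: coplanar subset.
        x0 = np.concatenate([np.zeros(6), X0.ravel()])
        sol = least_squares(residuals, x0, args=(K, obs, plane_idx, len(X0)))
        return sol.x[:3], sol.x[3:6], sol.x[6:].reshape(-1, 3)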
This paper considers the problem of 3D reconstruction from 2D points in one or more images, together with auxiliary information about the corresponding 3D features: known alignments, coplanarities, ratios of lengths, or symmetries. Our first contribution is a necessary and sufficient criterion that indicates whether a dataset, or subsets thereof, defines a rigid …
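A generic numerical way to probe the question raised here, whether the constraints pin a configuration down up to a global similarity, is to inspect the rank of the constraint Jacobian; the small check below is only that generic proxy, not the paper's criterion.

    # Generic rigidity proxy (not the paper's criterion): the configuration is treated
    # as determined when the constraint Jacobian loses exactly the gauge dimensions of rank.
    import numpy as np

    def looks_rigid(jacobian, n_unknowns, gauge_dim=7, tol=1e-9):
        # gauge_dim=7 corresponds to a 3D similarity: scale + rotation + translation.
        return np.linalg.matrix_rank(jacobian, tol=tol) == n_unknowns - gauge_dim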
We consider the problem of estimating the relative orientation of a number of individual photocells (or pixels) that hold fixed relative positions. The photocells measure the intensity of light traveling on a pencil of lines. We assume that the light-field thus sampled is changing, e.g. as the result of motion of the sensors, and use the obtained measurements …
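The abstract is cut off before the estimation step; one hedged way to picture the underlying idea is that photocells with nearby lines of sight record correlated intensity signals over time, so pairwise correlation can serve as a proxy for angular proximity. The snippet below illustrates that intuition and is not necessarily the paper's estimator.

    # Illustration only: correlate intensity time-series of photocells as a proxy for
    # how close their viewing directions are; the monotone map from correlation to an
    # angle-like dissimilarity is an assumption, not the paper's method.
    import numpy as np

    def angle_proxy(signals):
        # signals: (n_cells, n_samples) intensities; returns (n_cells, n_cells)
        # dissimilarities that grow as two cells' signals decorrelate.
        corr = np.corrcoef(signals)
        return np.arccos(np.clip(corr, -1.0, 1.0))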
This paper introduces a new video surveillance dataset that was captured by a network of synchronized cameras placed throughout an indoor setting and augmented with ground-truth data. The dataset includes ten minutes of footage of individuals moving throughout the sensor network. In addition, three scripted scenarios that contain behaviors exhibited …
This dissertation presents a novel methodology for vision-based robot navigation. One of the key observations is that navigation systems should be designed through a holistic approach, encompassing sensor design, the choice of adequate spatial representations, and the associated global localisation and local control schemes. We tackle a number of design …
Vision is an extraordinarily powerful sense. The ability to perceive the environment allows movement to be regulated by the world. Humans do this effortlessly, yet we still lack an understanding of how perception works. In the case of visual perception, many researchers, from psychologists to engineers, are working on this complex problem. Our approach is …
We propose a method for 3D reconstruction of structured environments from a single omnidirectional image. It is based on a reduced amount of user information, in the form of 2D pixel coordinates together with alignment and coplanarity properties amongst subsets of the corresponding 3D points. Just a few panoramic images are sufficient for building the 3D model, as …
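One building block that such user-supplied coplanarity information makes possible is direct ray-plane intersection: pixels marked as lying on a known plane can be lifted to 3D by intersecting their viewing rays with it. The sketch below assumes a central omnidirectional model whose pixel-to-ray mapping is already available; it is an illustration, not the paper's pipeline.

    # Minimal sketch: lift pixels on a known plane (n . X + d = 0) to 3D by intersecting
    # their unit viewing rays with that plane; the camera centre is at the origin.
    import numpy as np

    def intersect_plane(rays, n, d):
        # rays: (N, 3) unit directions; returns the (N, 3) intersection points.
        t = -d / (rays @ n)          # depth along each ray
        return rays * t[:, None]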
In this paper, we develop a theory of non-parametric self-calibration. Recently, schemes have been devised for non-parametric laboratory calibration, but not for self-calibration. We allow an arbitrary warp to model the intrinsic mapping, with the only restrictions being that the camera is central and that the intrinsic mapping has a well-defined non-singular …
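In this non-parametric setting the intrinsic mapping is, in effect, a per-pixel table of ray directions through the single projection centre rather than a formula with a few parameters. The container below is only a hedged sketch of that representation; how a self-calibration method fills and smooths it is exactly what the theory addresses and is not shown here.

    # Hedged sketch of a non-parametric intrinsic mapping: one unit ray per pixel,
    # all passing through a single (central) projection centre.
    import numpy as np

    class RayTable:
        def __init__(self, height, width):
            self.rays = np.zeros((height, width, 3))

        def set_ray(self, v, u, direction):
            self.rays[v, u] = direction / np.linalg.norm(direction)

        def ray(self, v, u):
            return self.rays[v, u]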