Matthias Zobel

Active object tracking, for example in surveillance tasks, is becoming increasingly important. Besides the tracking algorithms themselves, methodologies have to be developed for reasonable active control of the degrees of freedom of all involved cameras. In this paper we present an information theoretic approach that allows the optimal selection of …
MOBSY is a fully integrated autonomous mobile service robot system. It acts as an automatic dialogue-based receptionist for visitors to our institute. MOBSY incorporates many techniques from different research areas into one working stand-alone system. Especially the computer vision and dialogue aspects are of main interest from the pattern recognition …
In this paper we present an information theoretic framework that provides an optimality criterion for selecting the best sensor data with respect to the state estimation of a dynamic system. One relevant application in practice is tracking a moving object in 3-D using multiple sensors. Our approach extends previous and similar work in the area of active object …
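One compact way to state such a criterion (the notation below is assumed for illustration and not quoted from the paper) is to pick the sensor action that minimizes the expected entropy of the posterior state density:

    \begin{align*}
    a_t^{*} &= \arg\min_{a}\; \mathbb{E}_{o_t \sim p(o_t \mid o_{1:t-1},\, a)}
               \Big[ H\big( p(x_t \mid o_{1:t-1}, o_t, a) \big) \Big], \\
    H(p)    &= -\int p(x)\, \log p(x)\; \mathrm{d}x,
    \end{align*}

where x_t denotes the state, o_t the observation delivered by the selected sensor configuration, and a the sensor action; the expectation runs over the observations that the candidate action could produce.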
We describe a method for selecting optimal actions affecting the sensors in a probabilistic state estimation framework, with an application to selecting optimal zoom levels for a motor-controlled camera in an object tracking task. The action is selected to minimize the expected entropy of the state estimate. The contribution of this paper is the ability to …
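A minimal sketch of how such expected-entropy action selection could look in practice, assuming a particle approximation of the state posterior, a hypothetical observation model `likelihood` with `sample` and `pdf` methods, and a small discrete set of candidate zoom levels (none of these names come from the paper):

    # Hedged sketch of entropy-driven zoom selection; not the paper's code.
    import numpy as np

    def entropy_of_particles(weights, positions, bins=20):
        """Approximate the entropy of a weighted 1-D particle set via a histogram."""
        hist, edges = np.histogram(positions, bins=bins, weights=weights, density=True)
        p = hist * np.diff(edges)          # probability mass per bin
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    def expected_entropy(zoom, particles, weights, likelihood, n_samples=50, rng=None):
        """Monte Carlo estimate of the posterior entropy expected after choosing `zoom`."""
        rng = rng or np.random.default_rng()
        total = 0.0
        for _ in range(n_samples):
            # Draw a plausible true state and simulate the observation it would yield.
            idx = rng.choice(len(particles), p=weights)
            obs = likelihood.sample(particles[idx], zoom, rng)   # assumed observation model
            # Hypothetically reweight the particles with that observation.
            w = weights * likelihood.pdf(obs, particles, zoom)
            w /= w.sum()
            total += entropy_of_particles(w, particles)
        return total / n_samples

    def best_zoom(zoom_levels, particles, weights, likelihood):
        """Pick the zoom level with the smallest expected posterior entropy."""
        return min(zoom_levels,
                   key=lambda z: expected_entropy(z, particles, weights, likelihood))

The candidate whose simulated observations leave the particle weights most concentrated, i.e. with the lowest expected entropy, is the zoom level chosen for the next frame.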
Geometric object models have been widely used for visual object tracking. In this contribution we present particle filter based object tracking with pose estimation using an appearance-based light-field object model. A light field is an image-based object representation which can be used to render a photorealistic view of an arbitrarily shaped object from …
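As a rough illustration of the combination described here (a sketch under assumptions, not the authors' implementation), one step of a pose-tracking particle filter with an appearance-based likelihood could look as follows; `render_lightfield(pose)` stands in for the light-field rendering of the object at a hypothesized pose:

    # Sketch of one particle filter step with an appearance-based likelihood;
    # render_lightfield(pose) is a placeholder for the light-field object model.
    import numpy as np

    def particle_filter_step(particles, weights, image, render_lightfield,
                             motion_noise=0.05, rng=None):
        rng = rng or np.random.default_rng()

        # Resample pose hypotheses according to their current weights.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]

        # Predict: propagate the poses with a simple random-walk motion model.
        particles = particles + rng.normal(0.0, motion_noise, particles.shape)

        # Update: render a view for each pose and compare it with the camera image.
        new_weights = np.empty(len(particles))
        for i, pose in enumerate(particles):
            rendered = render_lightfield(pose)            # appearance prediction
            error = np.mean((rendered - image) ** 2)      # pixel-wise appearance error
            new_weights[i] = np.exp(-error / 0.1)         # assumed likelihood bandwidth
        new_weights /= new_weights.sum()

        return particles, new_weights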
We present a vision-based robotic system which uses a combination of several active sensing strategies to grip a small, free-standing target object with an initially unknown position and orientation. The object position is determined and maintained with a probabilistic visual tracking system. The cameras on the robot contain a motorized zoom lens, allowing …
We present a modular architecture for image understanding and active computer vision which consists of the following major components: sensor and actor interfaces required for data-driven active vision are encapsulated to hide machine-dependent parts; image segmentation is implemented in object-oriented programming as a hierarchy of image operator classes, …
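A minimal sketch of what such an operator hierarchy and encapsulated sensor interface might look like (class and method names are illustrative assumptions, not the architecture's actual interface):

    # Illustrative operator hierarchy; names are assumptions, not the real interfaces.
    from abc import ABC, abstractmethod
    import numpy as np

    class Sensor(ABC):
        """Encapsulates the machine-dependent part of image acquisition."""
        @abstractmethod
        def capture(self) -> np.ndarray: ...

    class ImageOperator(ABC):
        """Base of the segmentation hierarchy: every operator maps image -> image."""
        @abstractmethod
        def apply(self, image: np.ndarray) -> np.ndarray: ...

    class BoxSmoothing(ImageOperator):
        """Naive box filter (valid region only) as a stand-in low-level operator."""
        def __init__(self, size: int = 3):
            self.size = size
        def apply(self, image):
            k = self.size
            out = np.zeros((image.shape[0] - k + 1, image.shape[1] - k + 1))
            for dy in range(k):
                for dx in range(k):
                    out += image[dy:dy + out.shape[0], dx:dx + out.shape[1]]
            return out / (k * k)

    class Threshold(ImageOperator):
        """Binarization as a stand-in segmentation operator."""
        def __init__(self, level: float):
            self.level = level
        def apply(self, image):
            return (image > self.level).astype(np.uint8)

    class Pipeline(ImageOperator):
        """Operators compose, so a chain of operators is itself an operator."""
        def __init__(self, *steps: ImageOperator):
            self.steps = steps
        def apply(self, image):
            for step in self.steps:
                image = step.apply(image)
            return image

A chain such as Pipeline(BoxSmoothing(3), Threshold(0.5)) can then be applied to whatever Sensor.capture() delivers, independently of the concrete camera hidden behind the interface.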
Plenoptic models, whose best-known representatives are the light field and the lumigraph, have been successfully applied in computer vision and computer graphics over the past five years. The key idea is to model objects and scenes using images together with some extra information such as camera parameters or coarse geometry. The model differs from CAD models in the photorealism that can be …
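The key idea can be illustrated with a deliberately simplified sketch (names and the nearest-view rendering are illustrative assumptions, not the light-field or lumigraph algorithms themselves): the model is just a set of calibrated views, and rendering means synthesizing a requested view from the recorded ones.

    # Simplified image-based (plenoptic) model sketch; names are assumptions.
    import numpy as np

    class LightFieldModel:
        """Stores calibrated views (viewing direction + image) of one object."""
        def __init__(self):
            self.directions = []   # unit viewing directions of the recorded cameras
            self.images = []       # corresponding images

        def add_view(self, direction, image):
            d = np.asarray(direction, dtype=float)
            self.directions.append(d / np.linalg.norm(d))
            self.images.append(np.asarray(image))

        def render(self, direction):
            """Return the recorded view whose camera direction is closest to the query."""
            d = np.asarray(direction, dtype=float)
            d = d / np.linalg.norm(d)
            dots = [float(np.dot(d, v)) for v in self.directions]
            return self.images[int(np.argmax(dots))]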