Khurram Shafique

Conventional tracking approaches assume proximity in space, time and appearance of objects in successive observations. However, observations of objects are often widely separated in time and space when viewed from multiple non-overlapping cameras. To address this problem, we present a novel approach for establishing object correspondence across …
Tracking across cameras with non-overlapping views is a challenging problem. Firstly, the observations of an object are often widely separated in time and space when viewed from non-overlapping cameras. Secondly, the appearance of an object in one camera view might be very different from its appearance in another camera view due to the differences in …
When viewed from a system of multiple cameras with non-overlapping fields of view, the appearance of an object in one camera view is usually very different from its appearance in another camera view due to the differences in illumination, pose and camera parameters. In order to handle the change in observed colors of an object as it moves from one camera to …
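The excerpt cuts off before the method itself, so the following is only a plausible illustration rather than the paper's procedure: a per-channel brightness transfer function between two views, approximated by cumulative-histogram matching of an object's observations in both cameras. The function name, bin count, and placeholder data are assumptions.

```python
import numpy as np

def estimate_btf(values_cam_a, values_cam_b, bins=256):
    """Lookup table f such that f[brightness in camera A] approximates the
    corresponding brightness in camera B (one color channel)."""
    hist_a, _ = np.histogram(values_cam_a, bins=bins, range=(0, bins))
    hist_b, _ = np.histogram(values_cam_b, bins=bins, range=(0, bins))
    cdf_a = np.cumsum(hist_a) / hist_a.sum()
    cdf_b = np.cumsum(hist_b) / hist_b.sum()
    # Inverse-CDF matching: map each level in A to the level in B with the
    # same cumulative probability.
    return np.searchsorted(cdf_b, cdf_a).clip(0, bins - 1).astype(np.uint8)

# Placeholder usage with synthetic observations of the same object in both views
obj_a = np.random.randint(0, 256, 5000)
obj_b = np.clip(0.8 * obj_a + 20, 0, 255).astype(int)
btf = estimate_btf(obj_a, obj_b)
```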
We present a background subtraction method that uses multiple cues to robustly detect objects in adverse conditions. The algorithm consists of three distinct levels, i.e., pixel level, region level, and frame level. At the pixel level, statistical models of gradients and color are separately used to classify each pixel as belonging to background or foreground.
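As a minimal sketch of the pixel-level stage alone (the gradient model and the region- and frame-level reasoning are omitted), the snippet below maintains a per-pixel running Gaussian color model and labels pixels whose color deviates too far as foreground. The class name, learning rate, and thresholds are assumptions, not values from the paper.

```python
import numpy as np

class PixelColorModel:
    """Per-pixel running Gaussian model on color (pixel-level cue only)."""
    def __init__(self, first_frame, alpha=0.02, thresh=3.0):
        self.mean = first_frame.astype(np.float32)   # H x W x 3 background mean
        self.var = np.full_like(self.mean, 25.0)     # initial variance guess
        self.alpha, self.thresh = alpha, thresh

    def apply(self, frame):
        frame = frame.astype(np.float32)
        # Squared normalized distance of each pixel's color from its background model
        dist2 = ((frame - self.mean) ** 2 / (self.var + 1e-6)).sum(axis=-1)
        foreground = dist2 > self.thresh ** 2
        bg = ~foreground[..., None]
        # Update the statistics only where the pixel currently looks like background
        self.mean = np.where(bg, (1 - self.alpha) * self.mean + self.alpha * frame, self.mean)
        self.var = np.where(bg, (1 - self.alpha) * self.var + self.alpha * (frame - self.mean) ** 2, self.var)
        return foreground                             # H x W boolean mask
```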
This work presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general …
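For intuition only, the sketch below links points between two consecutive frames by repeatedly taking the cheapest unmatched pair; the proposed algorithm instead reasons over multiple frames jointly, so this two-frame greedy linker is illustrative, not the paper's method. The function name and distance gate are assumptions.

```python
import numpy as np

def greedy_link(prev_pts, next_pts, max_dist=30.0):
    """Return (i, j) index pairs linking prev_pts[i] to next_pts[j]."""
    prev_pts, next_pts = np.asarray(prev_pts, float), np.asarray(next_pts, float)
    # Pairwise Euclidean costs between the two point sets
    cost = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=-1)
    links, used_i, used_j = [], set(), set()
    # Visit candidate pairs from cheapest to most expensive
    for i, j in sorted(np.ndindex(*cost.shape), key=lambda ij: cost[ij]):
        if i in used_i or j in used_j or cost[i, j] > max_dist:
            continue
        links.append((i, j))
        used_i.add(i); used_j.add(j)
    return links

# Example: two frames of 2-D points; the third detection stays unmatched
print(greedy_link([(0, 0), (10, 10)], [(1, 1), (12, 9), (50, 50)]))
```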
In this paper, we propose a robust approach for tracking targets in forward-looking infrared (FLIR) imagery taken from an airborne moving platform. First, the targets are detected using fuzzy clustering, edge fusion and local texture energy. The position and the size of the detected targets are then used to initialize the tracking algorithm. For each …
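One of the named cues, local texture energy, can be illustrated with a simple local-variance filter; this is a hedged stand-in that assumes intensity variance in a sliding window as the energy measure and omits the fuzzy clustering and edge fusion stages. The window size is an assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def texture_energy(gray, win=9):
    """Local variance of intensity in a win x win window as a texture-energy map."""
    gray = gray.astype(np.float32)
    mean = uniform_filter(gray, size=win)
    mean_sq = uniform_filter(gray ** 2, size=win)
    return np.maximum(mean_sq - mean ** 2, 0.0)   # clamp small negative round-off
```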
We propose a novel method to model and learn the scene activity observed by a static camera. The proposed model is very general and can be applied to a variety of problems. The motion patterns of objects in the scene are modeled in the form of a multivariate nonparametric probability density function of spatiotemporal variables (object …)
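A minimal sketch of such a model, assuming the spatiotemporal feature is an (x, y, dx, dy) transition per tracked object: fit a multivariate kernel density estimate and flag low-density transitions as unusual. The feature choice, library, and threshold are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

# transitions: rows of (x, y, dx, dy) collected from tracked objects (placeholder data)
transitions = np.random.rand(1000, 4)
kde = gaussian_kde(transitions.T)            # nonparametric density p(x, y, dx, dy)

query = np.array([[0.5, 0.5, 0.01, 0.02]]).T # one candidate transition, shape (4, 1)
density = kde(query)[0]
# Threshold is illustrative only
print("anomalous" if density < 0.1 else "typical", density)
```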
The mapping that relates the image irradiance to the image brightness (intensity) is known as the Radiometric Response Function or Camera Response Function. This mapping is usually unknown, is nonlinear, and varies from one color channel to another. In this paper, we present a method to estimate the radiometric response functions (of the R, G and B channels) of a …
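For a sense of the estimation problem, the sketch below assumes a simple one-parameter gamma model for the inverse response of a single channel and two registered images of a static scene whose exposure ratio k is known; the paper's method is more general, so this is illustrative only.

```python
import numpy as np

def estimate_gamma(b1, b2, k):
    """b1, b2: normalized brightness in (0, 1) of the same pixels at exposures
    differing by a known factor k. Assuming the inverse response g(B) = B**gamma,
    g(b2) = k * g(b1) gives gamma = ln(k) / ln(b2 / b1) at each pixel."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    # Discard nearly saturated or nearly dark pixels
    valid = (b1 > 0.05) & (b1 < 0.95) & (b2 > 0.05) & (b2 < 0.95)
    ratios = np.log(b2[valid] / b1[valid])
    return np.log(k) / np.median(ratios)       # robust per-channel estimate
```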
A defensive k-alliance in a graph G = (V, E) is a set of vertices A ⊆ V such that for every vertex v ∈ A, the number of neighbors v has in A is at least k more than the number of neighbors it has in V − A (where k is the strength of the defensive k-alliance). An offensive k-alliance is a set of vertices A ⊆ V such that for every vertex v ∈ ∂A, the number of …
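Restating the stated defensive condition in symbols, with a small worked example (the offensive variant is truncated in the excerpt and is left unformalized):

```latex
% Defensive k-alliance, as defined in the abstract:
\[
  A \subseteq V \text{ is a defensive } k\text{-alliance} \iff
  \forall v \in A:\ \lvert N(v) \cap A \rvert \;\ge\; \lvert N(v) \cap (V \setminus A) \rvert + k.
\]
% Example: in the cycle C_4 on vertices {1,2,3,4}, the set A = {1,2} gives each
% v in A exactly one neighbor inside A and one outside, so A is a defensive
% 0-alliance but not a defensive 1-alliance.
```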
This paper presents a novel object-based video coding framework for videos obtained from a static camera. As opposed to most existing methods, the proposed method does not require explicit 2D or 3D models of objects and hence is general enough to cater for varying types of objects in the scene. The proposed system detects and tracks objects in the scene and …