At the first ICVS, we presented SA-C ("sassy"), a single-assignment variant of the C programming language designed to exploit both coarse-grain and fine-grain parallelism in computer vision and image processing applications. This paper presents a new optimizing compiler that maps SA-C source code onto field programmable gate array (FPGA) configurations.
This paper presents a high-level language for expressing image processing algorithms, and an optimizing compiler that targets FPGAs. The language is called SA-C, and this paper focuses on the language features that 1) support image processing, and 2) enable efficient compilation to FPGAs. It then describes the compilation process, in which SA-C algorithms …
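A minimal sketch of the kind of window-based image operation the two SA-C abstracts above refer to. This is plain Python, not SA-C syntax; the 3x3 box filter and the function name are placeholder assumptions chosen only to show why every output pixel can be computed independently, which is the fine-grain parallelism the compiler maps to FPGA hardware.

```python
# Illustration only: a 3x3 window reduction in which each output pixel is
# computed from its own window, so the loop body can be replicated as
# parallel hardware. Not SA-C syntax; the box filter is a placeholder.
import numpy as np

def box_filter_3x3(image: np.ndarray) -> np.ndarray:
    h, w = image.shape
    out = np.zeros((h - 2, w - 2), dtype=image.dtype)
    for i in range(h - 2):                      # every (i, j) is independent
        for j in range(w - 2):
            window = image[i:i + 3, j:j + 3]
            out[i, j] = window.sum() // 9
    return out

if __name__ == "__main__":
    img = np.arange(36, dtype=np.int64).reshape(6, 6)
    print(box_filter_3x3(img))
```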
This paper presents a novel patch-based approach for object tracking robust to partial and short-time total occlusions. Initially, the original template is divided into rectangular subregions (patches), and each patch is tracked independently. The displacement of the whole template is obtained using a weighted vector median filter that combines the …
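A short sketch of combining per-patch displacement estimates with a weighted vector median filter, as the abstract describes: the filter returns the candidate displacement minimizing the weighted sum of distances to all other candidates. The Euclidean distance and the per-patch confidence weights used here are assumptions; the paper's exact weighting scheme is not given in this excerpt.

```python
# Weighted vector median over patch displacement vectors (sketch).
import numpy as np

def weighted_vector_median(displacements: np.ndarray,
                           weights: np.ndarray) -> np.ndarray:
    """Return the displacement v_k minimizing sum_i w_i * ||v_k - v_i||."""
    diffs = displacements[:, None, :] - displacements[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)      # pairwise distances, shape (n, n)
    costs = dists @ weights                     # weighted cost of each candidate
    return displacements[np.argmin(costs)]

# Example: five patch displacements, one an outlier caused by occlusion.
d = np.array([[2.0, 1.0], [2.1, 0.9], [1.9, 1.1], [2.0, 1.0], [15.0, -8.0]])
w = np.array([1.0, 1.0, 1.0, 1.0, 0.2])         # low confidence for the outlier
print(weighted_vector_median(d, w))             # -> close to [2.0, 1.0]
```

Because the result is always one of the input vectors, a single occluded patch with a wild displacement cannot drag the whole-template estimate off the object, which is what makes the combination robust to partial occlusion.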
The number of features that can be computed over an image is, for practical purposes, limitless. Unfortunately, the number of features that can be computed and exploited by most computer vision systems is considerably less. As a result, it is important to develop techniques for selecting features from very large data sets that include many irrelevant or …
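The excerpt above breaks off before naming the selection technique, so the following is only a generic filter-style illustration of the problem it poses: rank a very large feature pool by a cheap relevance score and keep the top k. The correlation criterion, feature counts, and k are arbitrary placeholders, not the paper's method.

```python
# Generic filter-style feature ranking (illustration, not the paper's method).
import numpy as np

def top_k_by_correlation(features: np.ndarray, labels: np.ndarray, k: int):
    """features: (n_samples, n_features); labels: (n_samples,)."""
    centered_f = features - features.mean(axis=0)
    centered_y = labels - labels.mean()
    denom = centered_f.std(axis=0) * centered_y.std() * len(labels)
    denom[denom == 0] = np.inf                  # ignore constant features
    corr = (centered_f.T @ centered_y) / denom  # Pearson r per feature
    return np.argsort(-np.abs(corr))[:k]        # indices of the k best features

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10_000))              # mostly irrelevant features
y = (X[:, 42] + 0.1 * rng.normal(size=200) > 0).astype(float)
print(top_k_by_correlation(X, y, k=5))          # feature 42 should rank highly
```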
Optimal results for the Traveling Salesrep Problem have been reported on problems with up to 3038 cities using a GA with Edge Assembly Crossover (EAX). This paper first attempts to independently replicate these results on Padberg's 532-city problem. We then evaluate the performance contribution of the various algorithm components. The incorporation of 2-opt …
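A minimal sketch of the 2-opt local improvement the abstract refers to: repeatedly reverse a tour segment whenever swapping two edges shortens the tour. The random Euclidean instance and the first-improvement sweep are assumptions for illustration; the GA with Edge Assembly Crossover itself is not reproduced here.

```python
# 2-opt local improvement for a symmetric TSP tour (sketch).
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:                      # the two edges share an endpoint
                    continue
                # Replace edges (a,b) and (c,d) with (a,c) and (b,d) if shorter.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(50)]
dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in pts]
        for (x1, y1) in pts]
tour = list(range(50))
before = tour_length(tour, dist)
tour = two_opt(tour, dist)
print(round(before, 3), "->", round(tour_length(tour, dist), 3))
```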
This paper compares two solutions for human-like perception using two different modular "plug-and-play" frameworks, CAVIAR (List et al., 2005) and Psyclone (Thórisson et al., 2004, 2005a). Each uses a central point of configuration and requires the modules to be auto-descriptive, auto-critical and auto-regulative (Crowley and Reignier, 2003) for fully …
This paper describes a task-independent controller that allows for an easy implementation of vision systems for processing video sequences. The controller does not have a fixed dataflow or any fixed steps. The dataflow is constructed by the modules, which describe themselves to the controller. During operation, the modules and their parameters are selected …
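A hypothetical sketch of the idea in the controller abstract above: modules declare their own inputs and outputs, and the controller assembles the dataflow from those descriptions instead of hard-coding it. The module names, data keys, and chaining strategy are illustrative assumptions, not the paper's interface.

```python
# Self-describing modules and a controller that builds the dataflow (sketch).
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Module:
    name: str
    needs: List[str]                      # data keys this module consumes
    provides: str                         # data key this module produces
    run: Callable[[Dict], object]

class Controller:
    def __init__(self):
        self.modules: List[Module] = []

    def register(self, module: Module):
        self.modules.append(module)       # modules describe themselves here

    def produce(self, goal: str, data: Dict) -> Dict:
        """Run any module whose inputs are available until `goal` exists."""
        while goal not in data:
            runnable = [m for m in self.modules
                        if m.provides not in data and all(k in data for k in m.needs)]
            if not runnable:
                raise RuntimeError(f"no module chain produces {goal!r}")
            for m in runnable:
                data[m.provides] = m.run(data)
        return data

ctrl = Controller()
ctrl.register(Module("grabber",  [],        "frame",  lambda d: "raw frame"))
ctrl.register(Module("detector", ["frame"], "blobs",  lambda d: ["blob1", "blob2"]))
ctrl.register(Module("tracker",  ["blobs"], "tracks", lambda d: {"blob1": (3, 4)}))
print(ctrl.produce("tracks", {}))
```

The same registry idea also fits the plug-in video-analysis architecture described in the next abstract: a new module only has to state what it needs and what it provides in order to add to, or compete with, existing functionality.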
This paper presents an architecture for cognitive analysis of streaming video, in which a new module can easily be plugged in to add to or even compete with existing functionality. This allows the implementers to focus on the key scientific issues instead of struggling with the details of the implementation. The architecture is distributed and runs …