• Corpus ID: 5635581

Autonomic Computer Vision Systems

James L. Crowley, Daniela Hall, Rémi Emonet (INP Grenoble)
For most real applications of computer vision, variations in operating conditions result in poor reliability. As a result, real-world applications tend to require lengthy set-up and frequent intervention by qualified specialists. In this paper we describe how autonomic computing can be used to reduce the cost of installation and enhance reliability for practical computer vision systems. We begin by reviewing the origins of autonomic computing. We then describe the design of a tracking-based…


Learning to Adapt: A Method for Automatic Tuning of Algorithm Parameters

In this work, a method is presented for automatically and continuously tuning the parameters of algorithms in a real-time modular vision system; it is demonstrated on a three-module people-tracking system for video surveillance.
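The continuous-tuning idea can be illustrated with a minimal closed-loop sketch: a module exposes a quality measure, and a controller nudges a parameter toward higher measured quality. The names (`quality`, `threshold`) and the hill-climbing strategy are illustrative assumptions, not the paper's actual method.

```python
def quality(threshold):
    # Stand-in for a module's self-evaluation score; in a real system this
    # would be computed from the module's output (e.g. tracking stability).
    # This toy score peaks at threshold = 0.4.
    return 1.0 - (threshold - 0.4) ** 2

def tune(threshold, step=0.05, iters=50):
    """Hill-climb a single parameter toward higher measured quality."""
    for _ in range(iters):
        best = threshold
        # Try small perturbations in both directions and keep the better one.
        for cand in (threshold - step, threshold + step):
            if quality(cand) > quality(best):
                best = cand
        threshold = best
    return threshold

tuned = tune(0.9)  # starts far from the optimum, converges near 0.4
```

In a running system the loop would execute continuously alongside the vision modules, so the parameter tracks slow changes in operating conditions rather than converging once.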

EPypes: a framework for building event-driven data processing pipelines

EPypes is presented, an architecture and Python-based software framework for developing vision algorithms in the form of computational graphs and integrating them with distributed systems based on publish-subscribe communication. The framework facilitates flexible algorithm prototyping and provides a structured approach to managing algorithm logic and exposing the developed pipelines as part of online systems.

Parallel training and testing methods for complex image processing algorithms on distributed, heterogeneous, unreliable, and non-dedicated resources

Two methods are proposed: intelligent training based on genetic algorithms and PVM, and a full factorial design based on grid computing; both can be used for training or testing and are capable of harnessing the available computational resources.

Crossing Boundaries: Multi-Level Introspection in a Complex Robotic Architecture for Automatic Performance Improvements

This work introduces a novel multi-level introspection framework that can automatically adjust architectural configurations based on introspection results at the agent, infrastructure, and component levels, and demonstrates its utility in a concrete implementation on a robot.

Context-based selection and execution of robot perception graphs

A robot perception architecture is proposed that enables selecting and executing different perception graphs at runtime based on monitored context changes; it contains a repository of perception-graph configurations suited to various context conditions.

Filtering Surveillance Image Streams by Interactive Machine Learning

Experiments show that interactive machine learning helps detect safeguard-relevant events while significantly reducing the number of false positives; filter shaping is addressed as a data-driven process placed in the hands of end-users with extensive domain knowledge but no expertise in machine learning.

Integrated vision system for the semantic interpretation of activities where a person handles objects

Visual Sensor Technology for Advanced Surveillance Systems: Historical View, Technological Aspects and Research Activities in Italy

The paper describes the main characteristics of an advanced visual sensor network that directly processes locally acquired digital data, automatically modifies intrinsic and extrinsic parameters to increase the quality of acquired data and automatically selects the best subset of sensors in order to monitor a given moving object in the observed environment.

The OMiSCID 2.0 Middleware: Usage and Experiments in Smart Environments

Its advantages in both development and research projects are demonstrated, illustrating how it radically cuts development time, improves software reuse, and eases redeployment, notably in the context of Wizard-of-Oz experiments conducted in smart environments.

Using Digital Image Processing to Make an Intelligent Gate

An automatic system for controlling a building gate based on digital image processing is presented; it was applied to various types of vehicles and the results obtained were accurate.



The Vision of Autonomic Computing

A 2001 IBM manifesto noted the almost impossible difficulty of managing current and planned computing systems, which require integrating several heterogeneous environments into corporate-wide computing systems that extend into the Internet.

Automatic parameter regulation of perceptual systems

  • D. Hall
  • Computer Science
    Image Vis. Comput.
  • 2006

Knowledge-based control of vision systems

Dynamic configuration of resource-aware services

This paper shows how to provide a shared infrastructure that automates configuration decisions given a specification of the user's task and validates this approach both analytically and by applying it to a representative scenario.

The Imalab method for vision systems

  • A. Lux
  • Computer Science
    Machine Vision and Applications
  • 2004
This method of constructing computer vision systems, using a workbench based on a rich extensible toolbox and a general-purpose kernel, makes it possible to quickly develop and test new algorithms, simplifies the use and reuse of existing program libraries, and allows a variety of systems to be constructed to meet particular requirements.

Adaptive image analysis for aerial surveillance

A computer vision system that would automatically monitor its own performance and dynamically adapt to changing situations and requirements is described, based on computational reflection and control system theory.

SOM based algorithm for video surveillance system parameter optimal selection

The aim of the paper is to develop a new methodology for adaptive parameter regulation in video surveillance based on self-organizing maps (SOM).
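The SOM idea can be sketched minimally: map an observed scene feature (e.g. brightness) to a stored parameter value by finding the best-matching node and updating its neighborhood during training. The network size, learning rate, and the pairing of one feature with one parameter per node are illustrative assumptions, not the paper's actual design.

```python
import random

random.seed(0)  # reproducible initialization for the sketch

# A 1-D SOM: each node stores a scene feature and an associated parameter.
nodes = [{"feature": random.random(), "param": random.random()} for _ in range(8)]

def train(samples, epochs=20, lr=0.3, radius=2):
    """Pull the best-matching node and its index-neighbors toward each sample."""
    for _ in range(epochs):
        for feat, param in samples:
            # Best matching unit: node whose stored feature is closest.
            bmu = min(range(len(nodes)), key=lambda i: abs(nodes[i]["feature"] - feat))
            for i, n in enumerate(nodes):
                if abs(i - bmu) <= radius:
                    n["feature"] += lr * (feat - n["feature"])
                    n["param"] += lr * (param - n["param"])

def select_param(feat):
    """At runtime, look up the parameter stored at the best-matching node."""
    bmu = min(nodes, key=lambda n: abs(n["feature"] - feat))
    return bmu["param"]

# Toy training data: dark scenes pair with a low threshold, bright with high.
train([(0.1, 0.2), (0.9, 0.8)] * 10)
```

After training, `select_param` returns a parameter close to the one seen with similar scene conditions, which is the essence of SOM-based parameter selection.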

A distributed probabilistic system for adaptive regulation of image processing parameters

A distributed optimization framework and its application to the regulation of the behavior of a network of interacting image processing algorithms are presented. The algorithm parameters used to…

Event-based Activity Analysis in Live Video Using a Generic Object Tracker

The embedding of the generic, modular tracker architecture into a distributed infrastructure for visual surveillance applications is described, via an event-based mechanism that generates application-independent events on the basis of generic incidents and target interactions detected in the video stream.

Integration and control of reactive visual processes

  • J. Crowley
  • Computer Science
    Robotics Auton. Syst.
  • 1994