Visual Tracking via Adaptive Tracker Selection with Multiple Features

  • Ju Hong Yoon, Du Yong Kim, Kuk-jin Yoon
In this paper, a robust visual tracking method is proposed to track an object under dynamic conditions that include motion blur, illumination changes, pose variations, and occlusions. To cope with these challenges, multiple trackers with different feature descriptors are utilized, each of which shows a different level of robustness to certain changes in an object's appearance. To fuse these independent trackers, we propose two configurations: tracker selection and interaction. The tracker…
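The tracker-selection idea above can be sketched in a few lines. This is an illustrative toy, not the paper's actual method: each independent tracker (e.g., one per feature descriptor) reports a candidate box and a self-assessed confidence, and the fused output is simply the most confident candidate. The names `select_tracker` and the candidate tuples are hypothetical.

```python
def select_tracker(candidates):
    """candidates: list of (bbox, confidence) pairs, one per independent
    tracker. Returns the bbox reported by the most confident tracker."""
    best_bbox, _ = max(candidates, key=lambda c: c[1])
    return best_bbox

# Example: color-, gradient-, and texture-based trackers each propose a box.
candidates = [((10, 12, 40, 40), 0.62),   # color-histogram tracker
              ((11, 13, 40, 40), 0.81),   # gradient (HOG) tracker
              ((30, 50, 40, 40), 0.17)]   # texture tracker (has drifted)
print(select_tracker(candidates))  # -> (11, 13, 40, 40)
```

A real selection scheme would also need a calibrated, comparable confidence measure across trackers, which is where the interaction configuration comes in.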

Interacting Multiview Tracker

Experimental results on benchmark datasets demonstrate that the proposed interacting multiview algorithm performs robustly and favorably against state-of-the-art methods in terms of several quantitative metrics.

Adaptive Updating Probabilistic Model for Visual Tracking

An adaptive updating probabilistic model is proposed to track an object in real-world environments that include motion blur, illumination changes, pose variations, and occlusions; the tracker is adaptively updated through a searching-and-updating process.



Tracking using Numerous Anchor Points

An online adaptive model-free tracker is proposed to track single objects in video sequences, dealing with real-world tracking challenges such as low resolution, object deformation, occlusion, and motion blur, and using a pairwise distance measure to cope with scale variations.
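A pairwise distance measure for scale estimation can be sketched as follows. This is a hedged illustration of the general idea, not the paper's exact formulation: if the tracked anchor points expand or shrink together, the median ratio of pairwise point distances between the current and reference frames gives a scale factor that tolerates a few bad point matches.

```python
import numpy as np
from itertools import combinations

def scale_from_points(ref_pts, cur_pts):
    """Estimate the object's scale change as the median ratio of pairwise
    point distances between a reference frame and the current frame."""
    ratios = []
    for i, j in combinations(range(len(ref_pts)), 2):
        d_ref = np.linalg.norm(ref_pts[i] - ref_pts[j])
        d_cur = np.linalg.norm(cur_pts[i] - cur_pts[j])
        if d_ref > 1e-9:
            ratios.append(d_cur / d_ref)
    return float(np.median(ratios))

ref = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
cur = 2.0 * ref  # object appears twice as large in the current frame
print(scale_from_points(ref, cur))  # -> 2.0
```

The median makes the estimate robust: a single mistracked anchor point perturbs only a minority of the pairwise ratios.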

A Robust Visual Tracker with a Coupled-Classifier Based on Multiple Representative Appearance Models

The novel tracker proposed in this paper can, by explicit inference, reduce drift and handle frequent and drastic appearance variations of the target against cluttered backgrounds, as demonstrated by extensive experiments.

Self-correcting Bayesian target tracking

This thesis proposes a Track-Evaluate-Correct framework (self-correction) for existing trackers in order to achieve robust tracking, and presents a generic representation and formulation of self-correcting tracking for Bayesian trackers using a Dynamic Bayesian Network (DBN).

Short-Term Visual Object Tracking in Real-Time

A novel tracking framework, HMMTxD, is proposed that fuses multiple tracking methods together with a proposed feature-based online detector; the paper also contributes a framework for fusing multiple trackers and a detector, and contributes to the problem of tracker evaluation within the Visual Object Tracking (VOT) initiative.

Automatic tracker selection w.r.t object detection performance

Object detection is improved using Kanade-Lucas-Tomasi (KLT) feature tracking, and for each mobile object an appropriate tracker is selected between a KLT-based tracker and a discriminative appearance-based tracker.

Automatic Parameter Adaptation for Multi-object Tracking

The experimental results show that the proposed approach, which adapts the tracker parameters to context variations, outperforms recent state-of-the-art trackers.

A Distilled Model for Tracking and Tracker Fusion

This paper proposes a novel tracking methodology that takes advantage of other visual trackers, offline and online, and shows that the proposed algorithms compete with state-of-the-art trackers while running in real time.

Visual tracking decomposition

  • Junseok Kwon, Kyoung Mu Lee
  • Computer Science
    2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • 2010
We propose a novel tracking algorithm that can work robustly in challenging scenarios in which several kinds of appearance and motion changes of an object occur at the same time. Our algorithm is…

Probabilistic Color and Adaptive Multi-Feature Tracking with Dynamically Switched Priority Between Cues

A probabilistic multi-cue tracking approach is constructed by employing a novel randomized template tracker and a constant-color-model-based particle filter, which possesses the ability to adapt to changing object appearances.

Robust visual tracking using ℓ1 minimization

  • Xue Mei, Haibin Ling
  • Computer Science
    2009 IEEE 12th International Conference on Computer Vision
  • 2009
This paper proposes a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework and introduces a dynamic template update scheme that keeps track of the most representative templates throughout the tracking procedure.
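The sparse-approximation idea can be illustrated with a minimal ℓ1 solver. This is a hedged sketch, not the paper's implementation (it omits the trivial templates, nonnegativity constraints, and particle-filter machinery): a candidate patch `y` is coded over a template dictionary `T` by solving `min_x ||T x - y||² + λ||x||₁` with iterative soft thresholding (ISTA), and the reconstruction error would score the candidate.

```python
import numpy as np

def ista(T, y, lam=0.05, n_iter=200):
    """Solve min_x ||T x - y||^2 + lam * ||x||_1 by iterative soft
    thresholding; T holds one appearance template per column."""
    L = np.linalg.norm(T, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(T.shape[1])
    for _ in range(n_iter):
        grad = T.T @ (T @ x - y)
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
T = rng.standard_normal((20, 5))           # 5 templates in a 20-dim feature space
y = 0.8 * T[:, 2]                          # candidate resembling template 2
x = ista(T, y)
print(int(np.argmax(np.abs(x))))           # dominant coefficient is template 2
```

A good target candidate yields a sparse code concentrated on a few templates with low residual, while clutter or occluders produce diffuse codes and large error.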

A Unified Bayesian Framework for Adaptive Visual Tracking

A unified generative model for multi-sensory adaptive tracking which cleanly integrates tracking and the modeling of appearance change across multiple features in the same framework is derived.

Visual tracking with online Multiple Instance Learning

It is shown that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids the problems caused by ambiguously labeled training examples, and can therefore lead to a more robust tracker with fewer parameter tweaks.

Incremental Learning for Robust Visual Tracking

A tracking method that incrementally learns a low-dimensional subspace representation, efficiently adapting online to changes in the appearance of the target, and includes a method for correctly updating the sample mean and a forgetting factor to ensure less modeling power is expended fitting older observations.
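The subspace idea can be sketched compactly. This is an illustrative stand-in, not the paper's incremental algorithm (which updates the eigenbasis, sample mean, and a forgetting factor online rather than refitting): past target patches define a low-dimensional subspace, and a candidate is scored by its reconstruction error in that subspace.

```python
import numpy as np

def fit_subspace(patches, k=3):
    """Fit a k-dim subspace (mean + top-k principal directions) to past
    appearance patches, given as rows of `patches`."""
    mean = patches.mean(axis=0)
    _, _, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    return mean, Vt[:k]

def recon_error(x, mean, basis):
    """Distance from x to the affine subspace (mean, basis)."""
    c = basis @ (x - mean)                  # project onto the subspace
    x_hat = mean + basis.T @ c              # reconstruct
    return float(np.linalg.norm(x - x_hat))

rng = np.random.default_rng(1)
# Synthetic appearance vectors lying near a 2-D subspace of R^10.
B = rng.standard_normal((2, 10))
data = rng.standard_normal((50, 2)) @ B + 0.01 * rng.standard_normal((50, 10))
mean, basis = fit_subspace(data, k=2)
on_model = data[0]                          # resembles past appearances
off_model = on_model + 5.0                  # e.g., an occluded or drifted patch
print(recon_error(on_model, mean, basis) < recon_error(off_model, mean, basis))
```

Candidates that look like past target appearances reconstruct well; occluders and background patches do not, which is what makes the subspace useful as an observation model.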

Object tracking: A survey

The goal of this article is to review state-of-the-art tracking methods, classify them into different categories, identify new trends, and discuss important issues related to tracking, including the use of appropriate image features, selection of motion models, and detection of objects.

Learning to track with multiple observers

It is shown that, for face tracking with a handheld camera and hand tracking for gesture interaction, combining a small number of observers in a sequential cascade results in efficient algorithms that are both robust and precise.

Dependent Multiple Cue Integration for Robust Tracking

A new technique is presented for fusing multiple cues to robustly segment an object from its background in video sequences that suffer from abrupt changes in both illumination and target position; it proves much more effective in terms of accuracy and reliability.

PROST: Parallel robust online simple tracking

This work shows that augmenting an online learning method with complementary tracking approaches can lead to more stable results; it uses a simple template model as a non-adaptive and thus stable component, a novel optical-flow-based mean-shift tracker as a highly adaptive element, and an online random forest as a moderately adaptive appearance-based learner.