The Sixth Visual Object Tracking VOT2018 Challenge Results

@inproceedings{Kristan2018VOT,
  title={The Sixth Visual Object Tracking VOT2018 Challenge Results},
  author={Matej Kristan and Ale{\vs} Leonardis and Jiri Matas and Michael Felsberg and Roman P. Pflugfelder and Luka Cehovin Zajc and Tom{\'a}s Voj{\'i}r and Goutam Bhat and Alan Luke{\vz}i{\vc} and Abdelrahman Eldesokey and Gustavo Javier Fernandez and {\'A}lvaro Garc{\'i}a-Mart{\'i}n and {\'A}lvaro Iglesias-Arias and Aydin Alatan and Abel Gonzalez-Garcia and Alfredo Petrosino and Alireza Memarmoghadam and Andrea Vedaldi and Andrej Muhic and Anfeng He and Arnold W. M. Smeulders and Asanka G. Perera and Bo Li and Boyu Chen and Changick Kim and Changsheng Xu and Changzhen Xiong and Cheng Tian and Chong Luo and Chong Sun and Cong Hao and Daijin Kim and Deepak Mishra and Deming Chen and Dong Wang and Dongyoon Wee and Efstratios Gavves and Erhan Gundogdu and Erik Velasco-Salido and Fahad Shahbaz Khan and Fan Yang and Fei Zhao and Feng Li and Francesco Battistone and George De Ath and Gorthi Rama Krishna Sai Subrahmanyam and Guilherme Sousa Bastos and Haibin Ling and Hamed Kiani Galoogahi and Hankyeol Lee and Haojie Li and Haojie Zhao and Heng Fan and Honggang Zhang and Horst Possegger and Houqiang Li and Huchuan Lu and Hui Zhi and Huiyun Li and Hyemin Lee and Hyung Jin Chang and Isabela Drummond and Jack Valmadre and Jaime Spencer Martin and Javaan Singh Chahl and Jin Young Choi and Jing Li and Jinqiao Wang and Jinqing Qi and Jinyoung Sung and Joakim Johnander and Jo{\~a}o F. Henriques and Jongwon Choi and Joost van de Weijer and Jorge Rodr{\'i}guez Herranz and Jos{\'e} Mar{\'i}a Mart{\'i}nez Sanchez and Josef Kittler and Junfei Zhuang and Junyu Gao and Klemen Grm and Lichao Zhang and Lijun Wang and Lingxiao Yang and Litu Rout and Liu Si and Luca Bertinetto and Lutao Chu and Manqiang Che and Mario Edoardo Maresca and Martin Danelljan and Ming-Hsuan Yang and Mohamed H. Abdelpakey and Mohamed S. Shehata and Myung Gu Kang and Namhoon Lee and Ning Wang and Ondřej Mik{\vs}{\'i}k and Payman Moallem and Pablo Vicente-Mo{\~n}ivar and Pedro Senna and Peixia Li and Philip H. S. Torr and Priya Mariam Raju and Ruihe Qian and Qiang Wang and Qin Zhou and Qing Guo and Rafael Martin Nieto and Rama Krishna Sai Subrahmanyam Gorthi and Ran Tao and R. Bowden and Richard M. Everson and Runling Wang and Sangdoo Yun and Seokeon Choi and Sergio Vivas and Shuai Bai and Shuangping Huang and Sihang Wu and Simon Hadfield and Siwen Wang and Stuart Golodetz and Ming Tang and Tianyang Xu and Tianzhu Zhang and Tobias Fischer and Vincenzo Santopietro and Vitomir {\vS}truc and Wei Wang and Wangmeng Zuo and Wei Feng and Wei Wu and Wei Zou and Weiming Hu and Wen-gang Zhou and Wen Jun Zeng and Xiaofan Zhang and Xiaohe Wu and Xiaojun Wu and Xinmei Tian and Yan Li and Yan Lu and Yee Wei Law and Yi Wu and Y. Demiris and Yicai Yang and Yifan Jiao and Yuhong Li and Yunhua Zhang and Yuxuan Sun and Zheng Zhang and Zhengyu Zhu and Zhenhua Feng and Zhihui Wang and Zhiqun He},
  booktitle={ECCV Workshops},
  year={2018}
}
The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as a “real-time” experiment simulating a situation where a tracker processes images as if provided…
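The VOT methodology scores trackers primarily by accuracy (average overlap between predicted and ground-truth regions) and robustness (number of failures). As a rough sketch of the overlap half of that, here is the intersection-over-union computation for axis-aligned (x, y, w, h) boxes; this illustrates the measure only, and is not the VOT toolkit's implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x, y, w, h) axis-aligned boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle (width/height clamped at zero for disjoint boxes)
    ix = max(ax, bx)
    iy = max(ay, by)
    iw = max(0.0, min(ax + aw, bx + bw) - ix)
    ih = max(0.0, min(ay + ah, by + bh) - iy)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def average_overlap(predictions, ground_truth):
    """Per-sequence accuracy: mean IoU over frames."""
    overlaps = [iou(p, g) for p, g in zip(predictions, ground_truth)]
    return sum(overlaps) / len(overlaps)
```

In the actual VOT protocol, overlap is computed against (possibly rotated) ground-truth regions and averaged only over frames where the tracker has not failed; the sketch above keeps the axis-aligned special case.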

The Eighth Visual Object Tracking VOT2020 Challenge Results

A significant novelty is the introduction of a new VOT short-term tracking evaluation methodology and of segmentation ground truth in the VOT-ST2020 challenge – bounding boxes will no longer be used in the VOT-ST challenges.

Long-term Visual Tracking: Review and Experimental Comparison

This paper provides a thorough review of long-term tracking, summarizing long-term tracking algorithms from two perspectives, framework architectures and the utilization of intermediate tracking results, and discusses future prospects from multiple angles.

Local to global Tracker: A Siamese Network for Long-term Tracking

A Siamese network for the single-object tracking task is introduced. It consists of two branches: a classification branch that predicts positive or negative samples, and a regression branch that predicts the specific location of the object in the sequence.
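As a toy illustration of the matching step such Siamese trackers build on, the sketch below cross-correlates a template patch against a search region and picks the peak, standing in for the classification branch; real trackers correlate learned deep features and add a regression head, and all names here are mine:

```python
import numpy as np

def cross_correlate(search, template):
    """Slide the template over the search region and score each offset
    (the core matching operation in Siamese trackers)."""
    sh, sw = search.shape
    th, tw = template.shape
    out = np.zeros((sh - th + 1, sw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(search[i:i + th, j:j + tw] * template)
    return out

def locate(search, template):
    """Classification-style peak pick: the offset with the highest score."""
    scores = cross_correlate(search, template)
    return np.unravel_index(np.argmax(scores), scores.shape)
```

A regression branch would then refine the box around the peak location instead of committing to the coarse grid offset.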

Deep Bidirectional Correlation Filters for Visual Object Tracking

This work proposes a novel algorithm based on bidirectional DCFs for VOT; it realizes highly accurate DCFs because forward and backward tracking information is fused for consistent VOT.
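The paper's fusion is specific to its DCF formulation, but the generic idea it rests on can be sketched as a forward-backward consistency check: a sequence is tracked forward, then backward from the final position, and the two trajectories are compared (function name and trajectory format below are assumptions):

```python
def forward_backward_error(forward_track, backward_track):
    """Per-frame Euclidean distance between the forward trajectory and the
    backward trajectory replayed in forward temporal order; small values
    indicate the two directions agree (i.e., consistent tracking).

    Each trajectory is a list of (x, y) target centers; the backward
    trajectory is assumed to start at the last frame and end at the first.
    """
    errors = []
    for (fx, fy), (bx, by) in zip(forward_track, reversed(backward_track)):
        errors.append(((fx - bx) ** 2 + (fy - by) ** 2) ** 0.5)
    return errors
```

Frames where this error is large are exactly the ones where fusing the two directions (or down-weighting one of them) can correct drift.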

Predictive Visual Tracking: A New Benchmark and Baseline Approach

A new predictive visual tracking baseline is developed to compensate for the latency stemming from the onboard computation and can provide a more realistic evaluation of the trackers for the robotic applications.
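The latency-aware evaluation idea can be sketched independently of the benchmark's toolkit: instead of pairing every frame with its own (possibly late) tracker output, each frame is judged against the most recent output that was actually finished when the frame arrived. A minimal sketch with hypothetical timestamps:

```python
def latency_aware_outputs(frame_times, result_times, results):
    """For each incoming frame, return the most recent tracker output that
    was completed before the frame arrived (None until the first result).
    Slow trackers are thereby penalized, unlike in the offline protocol
    where every frame waits for its own result."""
    aligned = []
    k = -1  # index of the latest result already available
    for t in frame_times:
        while k + 1 < len(result_times) and result_times[k + 1] <= t:
            k += 1
        aligned.append(results[k] if k >= 0 else None)
    return aligned
```

A predictive baseline in this setting would additionally extrapolate the stale output forward to the current frame time, rather than reporting it unchanged.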

Fast and Robust Visual Tracking with Few-Iteration Meta-Learning

This work proposes a meta-learning method based on fast optimization for visual object tracking; it performs well on the VOT2018 and GOT-10k datasets and achieves fast, robust real-time performance.

Single Object Tracking Research: A Survey

This paper first reviews the two most popular tracking frameworks of the past ten years, i.e., Correlation Filter (CF) based and Siamese network based visual object tracking, and presents the rationale, improvement strategies, and representative works of these two frameworks in detail.

Extending Visual Object Tracking for Long Time Horizons

A novel fully convolutional, anchor-free Siamese framework for visual object tracking is presented, and a novel metric for long-term tracking is proposed that captures the ability of a tracker to track consistently over long durations.

Hard Occlusions in Visual Object Tracking

It is observed that tracker performance varies wildly between different categories of hard occlusions, where a top-performing tracker on one category performs significantly worse on a different category, suggesting that the common tracker rankings using averaged single performance scores are not adequate to gauge tracker performance in real-world scenarios.
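The observation generalizes to any per-attribute evaluation: a ranking built on a single averaged score can invert the ranking within an individual category. A small illustrative sketch with made-up scores:

```python
def rank_by_mean(scores):
    """scores: {tracker: {category: score}}. Rank trackers by the mean
    score across categories, best first."""
    means = {t: sum(c.values()) / len(c) for t, c in scores.items()}
    return sorted(means, key=means.get, reverse=True)

def rank_per_category(scores, category):
    """Rank trackers on a single occlusion category, best first."""
    return sorted(scores, key=lambda t: scores[t][category], reverse=True)

# Hypothetical scores: tracker A dominates on one category but collapses
# on the other; the averaged ranking hides the collapse.
scores = {"A": {"occ": 0.9, "blur": 0.1},
          "B": {"occ": 0.4, "blur": 0.5}}
```

Here `rank_by_mean` puts A first overall, while `rank_per_category(scores, "blur")` puts B first, which is the averaging pitfall the paper points out.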

Visual Object Tracking with Discriminative Filters and Siamese Networks: A Survey and Outlook

This survey presents a systematic and thorough review of more than 90 DCF and Siamese trackers, based on results on nine tracking benchmarks, and distinguishes and comprehensively reviews the shared as well as paradigm-specific open research challenges in both tracking paradigms.

The Visual Object Tracking VOT2017 Challenge Results

The Visual Object Tracking challenge VOT2017 is the fifth annual tracker benchmarking activity organized by the VOT initiative. Results of 51 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years.

The Visual Object Tracking VOT2016 Challenge Results

The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground truth bounding box annotation methodology and extending the evaluation system with the no-reset experiment.

The Visual Object Tracking VOT2015 Challenge Results

The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance, and presents a new VOT2015 dataset twice the size of the VOT2014 dataset, with full annotation of targets by rotated bounding boxes and per-frame attributes.

Long-Term Visual Object Tracking Benchmark

Existing short-sequence benchmarks fail to bring out the inherent differences between tracking algorithms, which widen when tracking on long sequences; the accuracy of trackers drops abruptly on challenging long sequences, suggesting a potential need for research efforts in the direction of long-term tracking.

The Thermal Infrared Visual Object Tracking VOT-TIR2015 Challenge Results

The Thermal Infrared Visual Object Tracking challenge 2015, VOT-TIR2015, aims at comparing short-term single-object visual trackers that work on thermal infrared (TIR) sequences and do not apply pre-learned models of object appearance.

A Novel Performance Evaluation Methodology for Single-Target Trackers

The requirements are the basis of a new evaluation methodology that aims at a simple and easily interpretable tracker comparison, and of a fully annotated dataset with per-frame annotations of several visual attributes, which is the largest benchmark to date.

Long-term Tracking in the Wild: A Benchmark

The OxUvA dataset and benchmark for evaluating single-object tracking algorithms is introduced, offering the community a large and diverse benchmark to enable the design and evaluation of tracking methods ready to be used “in the wild”.

Long-Term Tracking through Failure Cases

A visual tracking algorithm is proposed that is robust to many of the difficulties which often occur in real-world scenes and addresses long-term stability, enabling the tracker to recover from drift and to re-detect the object after disappearance or occlusion.

TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild

This work presents TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild, which covers a wide selection of object classes in broad and diverse context and provides an extensive benchmark on TrackingNet by evaluating more than 20 trackers.

Object Tracking Benchmark

An extensive evaluation of the state-of-the-art online object-tracking algorithms with various evaluation criteria is carried out to identify effective approaches for robust tracking and provide potential future research directions in this field.