Need for Speed: A Benchmark for Higher Frame Rate Object Tracking

@article{Galoogahi2017NeedFS,
  title={Need for Speed: A Benchmark for Higher Frame Rate Object Tracking},
  author={Hamed Kiani Galoogahi and Ashton Fagg and Chen Huang and Deva Ramanan and Simon Lucey},
  journal={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={1134-1143}
}
In this paper, we propose the first higher frame rate video dataset (called Need for Speed - NfS) and benchmark for visual object tracking. [...] Our dataset and benchmark allows for the first time (to our knowledge) systematic exploration of such issues, and will be made available to allow for further research in this space.
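The benchmark's central question is how tracking accuracy changes with capture frame rate. As a rough illustration only (not the official NfS evaluation toolkit), the sketch below computes the standard overlap-success score from per-frame bounding boxes and simulates a lower frame rate by naively subsampling a higher frame rate sequence; pred_boxes, gt_boxes, and the frame rates are hypothetical placeholders.

import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def success_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0, 1, 21)):
    """Area under the success curve: fraction of frames whose IoU exceeds
    each overlap threshold, averaged over the thresholds."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float(np.mean([(overlaps > t).mean() for t in thresholds]))

def subsample(sequence, src_fps=240, dst_fps=30):
    """Simulate a lower capture rate by keeping every (src_fps // dst_fps)-th frame."""
    return sequence[::src_fps // dst_fps]

Running the same tracker on the full 240 FPS annotations and on a subsampled 30 FPS version of the same sequence, then comparing success_auc, is one crude way to probe the frame-rate effect the paper studies.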
LaSOT: A High-Quality Benchmark for Large-Scale Single Object Tracking
TLDR
LaSOT is presented, a high-quality benchmark for Large-scale Single Object Tracking that consists of 1,400 sequences with more than 3.5M frames in total, and is the largest, to the best of the authors' knowledge, densely annotated tracking benchmark.
Long-Term Visual Object Tracking Benchmark
TLDR
Existing short-sequence benchmarks fail to bring out the inherent differences between tracking algorithms, which widen while tracking long sequences; tracker accuracy drops abruptly on challenging long sequences, suggesting a need for research efforts in the direction of long-term tracking.
The need 4 speed in real-time dense visual tracking
TLDR
This paper proposes a novel combination of hardware and software components that avoids the need to compromise between a dense accurate depth map and a high frame rate, and proposes a machine learning based depth refinement step that is an order of magnitude faster than traditional postprocessing methods.
LaSOT: A High-quality Large-scale Single Object Tracking Benchmark
TLDR
The goal in releasing LaSOT is to provide a dedicated, high-quality platform for both training and evaluating trackers; to the best of the authors' knowledge it is the largest densely annotated tracking benchmark presented, and it takes advantage of the close connection between visual appearance and natural language.
Is First Person Vision Challenging for Object Tracking?
TLDR
The study extensively analyses the performance of recent visual trackers and baseline FPV trackers with respect to different aspects, including a new performance measure, and shows that object tracking in FPV is challenging.
Benchmarking Deep Trackers on Aerial Videos
TLDR
The findings indicate that trackers perform significantly worse on aerial datasets than on standard ground-level videos, and attribute this effect to smaller target size, camera motion, significant camera rotation with respect to the target, out-of-view movement, and clutter.
Extending Visual Object Tracking for Long Time Horizons
Visual object tracking is a fundamental task in computer vision and is a key component in a wide range of applications like surveillance, autonomous navigation, video analysis and editing, and augmented reality. [...]
VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows
  • Xiao Wang, Jianing Li, +6 authors Feng Wu
  • Computer Science
  • ArXiv
  • 2021
TLDR
This work proposes a large-scale VisibleEvent benchmark (termed VisEvent) and builds a simple but effective tracking algorithm by proposing a cross-modality transformer, to achieve more effective feature fusion between visible and event data.
TrackingNet: A Large-Scale Dataset and Benchmark for Object Tracking in the Wild
TLDR
This work presents TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild, which covers a wide selection of object classes in broad and diverse context and provides an extensive benchmark on TrackingNet by evaluating more than 20 trackers.
Tracking and frame-rate enhancement for real-time 2D human pose estimation
TLDR
This work proposes a near real-time solution for frame-rate enhancement that enables the use of existing sophisticated pose estimation solutions at elevated frame rates, using a multistage system of queues operating in a multi-threaded environment.
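The summary above describes a multistage queue pipeline; the sketch below illustrates that general producer/consumer pattern with Python's threading and queue modules, not the authors' system. The keyframe stride, the 50 ms fake inference time, and slow_pose_estimator are made-up placeholders, and a real system would interpolate or track between keyframes rather than simply reusing the last estimate.

import queue
import threading
import time

frame_q = queue.Queue(maxsize=64)   # raw frames from the capture stage
result_q = queue.Queue()            # (frame_id, pose) pairs for downstream stages

def slow_pose_estimator(frame):
    """Stand-in for an expensive per-frame pose model (hypothetical)."""
    time.sleep(0.05)                # pretend inference takes ~50 ms
    return {"source_frame": frame, "keypoints": "..."}

def estimator_worker(keyframe_stride=4):
    """Run the heavy model only on every keyframe_stride-th frame and
    reuse the latest estimate for the frames in between."""
    last_pose = None
    while True:
        frame_id, frame = frame_q.get()
        if frame is None:           # sentinel: shut the stage down
            result_q.put(None)
            break
        if last_pose is None or frame_id % keyframe_stride == 0:
            last_pose = slow_pose_estimator(frame)
        result_q.put((frame_id, last_pose))

threading.Thread(target=estimator_worker, daemon=True).start()

# Capture stage: push synthetic frames, then a sentinel to stop the worker.
for i in range(16):
    frame_q.put((i, f"frame-{i}"))
frame_q.put((None, None))

# Consume results in frame order.
while (item := result_q.get()) is not None:
    print(item)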

References

Showing 1-10 of 43 references
Real-Time Camera Tracking: When is High Frame-Rate Best?
TLDR
This work opens up a route to a systematic investigation via the careful synthesis of photorealistic video using ray-tracing of a detailed 3D scene, experimentally obtained photometric response and noise models, and rapid camera motions, based on tens of thousands of hours of CPU rendering time.
Visual Tracking: An Experimental Survey
TLDR
It is demonstrated that trackers can be evaluated objectively by survival curves, Kaplan-Meier statistics, and Grubbs testing, and it is found that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score.
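To make the survival-curve idea concrete, here is a minimal sketch assuming each sequence comes with per-frame overlap scores: a tracker "survives" until its overlap first drops below a threshold, and the curve reports the fraction of sequences still tracked at each frame index. This is illustrative only and omits the survey's exact protocol (censoring, Kaplan-Meier details, Grubbs testing, F-score definition).

import numpy as np

def failure_frame(overlaps, threshold=0.5):
    """Index of the first frame whose overlap drops below threshold;
    returns the sequence length if the tracker never fails."""
    below = np.where(np.asarray(overlaps) < threshold)[0]
    return int(below[0]) if below.size else len(overlaps)

def survival_curve(per_sequence_overlaps, threshold=0.5):
    """Fraction of sequences still successfully tracked at each frame index,
    i.e. an empirical survival function over tracker lifetimes."""
    lifetimes = [failure_frame(o, threshold) for o in per_sequence_overlaps]
    horizon = max(len(o) for o in per_sequence_overlaps)
    return np.array([np.mean([lt > t for lt in lifetimes]) for t in range(horizon)])

# Hypothetical per-frame IoU scores for three sequences of different lengths.
curve = survival_curve([[0.8, 0.7, 0.2, 0.1], [0.9] * 6, [0.6, 0.4, 0.7]])
print(curve)   # starts at 1.0 and decays as trackers fail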
Convolutional Features for Correlation Filter Based Visual Tracking
TLDR
The results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers, and show that the convolutional features provide improved results compared to standard hand-crafted features.
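As a rough illustration of how arbitrary feature channels (for example, activations of a shallow convolutional layer) plug into a correlation filter, the sketch below implements a MOSSE/DCF-style closed-form filter in the Fourier domain. It is not the paper's tracker; the regularization weight, Gaussian width, and feature map are arbitrary placeholders.

import numpy as np

def gaussian_response(h, w, sigma=2.0):
    """Desired correlation output: a Gaussian peak centred on the target."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (ys - h // 2) ** 2 + (xs - w // 2) ** 2
    return np.fft.fftshift(np.exp(-dist2 / (2 * sigma ** 2)))

def train_filter(features, response, lam=1e-2):
    """Closed-form multi-channel correlation filter in the Fourier domain;
    features is (C, H, W), e.g. activations of a shallow conv layer."""
    F = np.fft.fft2(features, axes=(-2, -1))
    G = np.fft.fft2(response)
    denom = (F * np.conj(F)).sum(axis=0) + lam
    return (G[None] * np.conj(F)) / denom           # one filter per channel

def detect(filt, features):
    """Apply the filter to new-frame features; the response-map argmax
    gives the predicted target translation."""
    Z = np.fft.fft2(features, axes=(-2, -1))
    response = np.real(np.fft.ifft2((filt * Z).sum(axis=0)))
    return np.unravel_index(np.argmax(response), response.shape)

# Hypothetical 31-channel feature map of a 64x64 training patch.
feats = np.random.randn(31, 64, 64)
filt = train_filter(feats, gaussian_response(64, 64))
print(detect(filt, feats))   # ~(0, 0): zero displacement on the training patch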
Learning to Track at 100 FPS with Deep Regression Networks
TLDR
This work proposes a method for offline training of neural networks that can track novel objects at test-time at 100 fps, which is significantly faster than previous methods that use neural networks for tracking, which are typically very slow to run and not practical for real-time applications.
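A minimal sketch of the regression-tracking idea described above: a shared backbone embeds the previous-frame target crop and the current-frame search crop, and fully connected layers regress the box. The layer sizes, crop resolution, and optimizer settings below are placeholders, not the paper's architecture; only the overall structure (two crops in, four box coordinates out, L1 loss, offline training) follows the summary.

import torch
import torch.nn as nn

class RegressionTracker(nn.Module):
    """Sketch of a regression tracker: shared conv backbone on both crops,
    fully connected head regressing the target box in the search crop."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for a pretrained CNN
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(6),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * 64 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, 4),                  # (x1, y1, x2, y2) in crop coordinates
        )

    def forward(self, prev_crop, curr_crop):
        feats = torch.cat([self.backbone(prev_crop), self.backbone(curr_crop)], dim=1)
        return self.head(feats)

# One offline training step on a hypothetical batch of crop pairs.
model = RegressionTracker()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
prev = torch.randn(8, 3, 128, 128)              # target crops from frame t-1
curr = torch.randn(8, 3, 128, 128)              # search crops from frame t
target_boxes = torch.rand(8, 4)                 # ground-truth boxes (normalized)
loss = nn.functional.l1_loss(model(prev, curr), target_boxes)
opt.zero_grad()
loss.backward()
opt.step()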
Object Tracking Benchmark
TLDR
An extensive evaluation of the state-of-the-art online object-tracking algorithms with various evaluation criteria is carried out to identify effective approaches for robust tracking and provide potential future research directions in this field.
Learning Background-Aware Correlation Filters for Visual Tracking
TLDR
This work proposes a Background-Aware Correlation Filter based on hand-crafted features (HOG) that can efficiently model how both the foreground and background of the object vary over time, and demonstrates superior accuracy and real-time performance compared to state-of-the-art trackers.
Fully-Convolutional Siamese Networks for Object Tracking
TLDR
A basic tracking algorithm is equipped with a novel fully-convolutional Siamese network trained end-to-end on the ILSVRC15 dataset for object detection in video and achieves state-of-the-art performance in multiple benchmarks.
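The core operation the summary describes is a cross-correlation between the embeddings of an exemplar patch and a larger search region. The sketch below shows just that step with placeholder embedding layers; it omits the actual backbone, logistic training loss, and multi-scale search.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared fully convolutional embedding applied to both inputs
# (placeholder layers, not the paper's backbone).
embed = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
)

def response_map(exemplar, search):
    """Cross-correlate the exemplar embedding with the search embedding by
    using it as a convolution kernel; the peak locates the target."""
    z = embed(exemplar)          # (1, C, hz, wz)
    x = embed(search)            # (1, C, hx, wx), with hx > hz
    return F.conv2d(x, z)        # (1, 1, hx - hz + 1, wx - wz + 1)

exemplar = torch.randn(1, 3, 127, 127)   # target patch from the first frame
search = torch.randn(1, 3, 255, 255)     # search region in the current frame
with torch.no_grad():
    score = response_map(exemplar, search)
peak = torch.nonzero(score[0, 0] == score.max())[0]   # (row, col) of best match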
Adaptive Color Attributes for Real-Time Visual Tracking
TLDR
The contribution of color in a tracking-by-detection framework is investigated and an adaptive low-dimensional variant of color attributes is proposed, suggesting that color attributes provide superior performance for visual tracking.
Encoding color information for visual tracking: Algorithms and benchmark
TLDR
This paper comprehensively encodes 10 chromatic models into 16 carefully selected state-of-the-art visual trackers and performs detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance.
Siamese Instance Search for Tracking
TLDR
It turns out that the learned matching function is so powerful that a simple tracker built upon it, coined Siamese INstance search Tracker, SINT, suffices to reach state-of-the-art performance.