Multi-Camera Multiple 3D Object Tracking on the Move for Autonomous Vehicles

@inproceedings{Nguyen2022MultiCameraM3,
  title={Multi-Camera Multiple 3D Object Tracking on the Move for Autonomous Vehicles},
  author={Pha Nguyen and Kha Gia Quach and Chi Nhan Duong and Ngan T. H. Le and Xuan-Bac Nguyen and Khoa Luu},
  booktitle={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2022},
  pages={2568-2577}
}
  • Published 19 April 2022
The development of autonomous vehicles provides an opportunity to equip a car with a complete set of camera sensors capturing the environment around it. Object detection and tracking must therefore address new challenges, such as achieving consistent results across camera views. To address these challenges, this work presents a new Global Association Graph Model with Link Prediction approach to predict the locations of existing tracklets and link detections with tracklets via cross-attention…
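The abstract describes linking detections to tracklets via cross-attention. The paper's actual architecture is not reproduced here, but the core mechanism can be sketched as scaled dot-product attention between tracklet embeddings and detection embeddings; all function and variable names below are illustrative, not the authors' API.

```python
import math

def cross_attention_scores(tracklets, detections):
    """Score each (tracklet, detection) pair with scaled dot-product
    attention over their feature embeddings, softmax-normalised over
    the detections for each tracklet (illustrative sketch only)."""
    d = len(detections[0])  # embedding dimension
    scores = []
    for t in tracklets:
        # raw similarity of this tracklet to every detection
        row = [sum(ti * di for ti, di in zip(t, det)) / math.sqrt(d)
               for det in detections]
        # numerically stable softmax over detections
        m = max(row)
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        scores.append([e / z for e in exps])
    return scores

# toy example: 2 tracklet embeddings scored against 3 detections
tracks = [[1.0, 0.0], [0.0, 1.0]]
dets = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
attn = cross_attention_scores(tracks, dets)
```

Each row of `attn` is a probability distribution over detections; the highest-scoring detection per tracklet is the natural association candidate.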

Depth Perspective-aware Multiple Object Tracking

A new real-time Depth Perspective-aware Multiple Object Tracking (DP-MOT) approach to tackle the occlusion problem in MOT, which consistently achieves state-of-the-art performance compared to recent MOT methods on standard MOT benchmarks.

References

Showing 1–10 of 47 references

Monocular Quasi-Dense 3D Object Tracking

This work proposes a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform; an LSTM-based object velocity learning module aggregates long-term trajectory information for more accurate motion extrapolation.

Joint Monocular 3D Vehicle Detection and Tracking

A novel online framework for 3D vehicle detection and tracking from monocular videos that can not only associate detections of vehicles in motion over time, but also estimate their complete 3D bounding box information from a sequence of 2D images captured on a moving platform.

DyGLIP: A Dynamic Graph Model with Link Prediction for Accurate Multi-Camera Multiple Object Tracking

A new Dynamic Graph Model with Link Prediction (DyGLIP) approach is proposed to solve the data association task in Multi-Camera Multiple Object Tracking; it offers several advantages, including better feature representations and the ability to recover from lost tracks during camera transitions.

Probabilistic 3D Multi-Object Tracking for Autonomous Driving

This paper presents an online tracking method that took first place in the NuScenes Tracking Challenge and outperforms the AB3DMOT baseline by a large margin on the Average Multi-Object Tracking Accuracy (AMOTA) metric.

Tracking Objects as Points

Tracking has traditionally been the art of following interest points through space and time. This changed with the rise of powerful deep networks. Nowadays, tracking is dominated by pipelines that…

Online Multi-object Tracking Using CNN-Based Single Object Tracker with Spatial-Temporal Attention Mechanism

A CNN-based framework for online MOT that utilizes the merits of single object trackers in adapting appearance models and searching for the target in the next frame, and introduces a spatial-temporal attention mechanism (STAM) to handle drift caused by occlusion and interaction among targets.

AB3DMOT: A Baseline for 3D Multi-Object Tracking and New Evaluation Metrics

This work proposes a new 3D MOT evaluation tool along with three new metrics to comprehensively evaluate 3D MOT methods, and shows that the proposed method achieves strong 2D MOT performance on KITTI and runs at 207.4 FPS, the fastest speed among modern 3D MOT systems.
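AB3DMOT associates Kalman-predicted 3D boxes with new detections via the Hungarian algorithm on a 3D IoU matrix. As a simplified stand-in for that association step, the sketch below performs greedy matching on a precomputed affinity matrix (the threshold and greedy strategy are assumptions for illustration, not AB3DMOT's exact procedure).

```python
def greedy_match(affinity, threshold=0.1):
    """Greedily assign tracks (rows) to detections (columns) in
    descending affinity order, e.g. 3D IoU. AB3DMOT itself uses the
    optimal Hungarian algorithm; greedy matching is a common,
    simpler approximation."""
    pairs = sorted(
        ((affinity[i][j], i, j)
         for i in range(len(affinity))
         for j in range(len(affinity[0]))),
        reverse=True)
    used_t, used_d, matches = set(), set(), []
    for a, i, j in pairs:
        if a < threshold or i in used_t or j in used_d:
            continue  # skip weak affinities and already-matched rows/cols
        used_t.add(i)
        used_d.add(j)
        matches.append((i, j))
    return matches

# 2 predicted tracks vs 3 detections
iou = [[0.8, 0.0, 0.2],
       [0.1, 0.6, 0.0]]
print(greedy_match(iou))  # → [(0, 0), (1, 1)]
```

Unmatched detections would spawn new tracks, and unmatched tracks age out after a fixed number of missed frames.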

GNN3DMOT: Graph Neural Network for 3D Multi-Object Tracking With 2D-3D Multi-Feature Learning

This work proposes two techniques to improve discriminative feature learning for MOT: a novel feature interaction mechanism based on a Graph Neural Network, and a novel joint feature extractor that learns appearance and motion features from 2D and 3D space simultaneously.

Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics

This work introduces two intuitive and general metrics to allow for objective comparison of tracker characteristics, focusing on their precision in estimating object locations, their accuracy in recognizing object configurations and their ability to consistently label objects over time.
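The two CLEAR MOT metrics are MOTP (localization precision) and MOTA (tracking accuracy). MOTA combines misses, false positives, and identity switches into a single score, which can be computed directly from frame-level counts:

```python
def mota(false_negatives, false_positives, id_switches, num_gt):
    """CLEAR MOT accuracy:
    MOTA = 1 - (FN + FP + IDSW) / (number of ground-truth objects),
    with all counts summed over every frame of the sequence."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt

# e.g. 100 ground-truth objects, 5 misses, 3 false alarms, 2 ID switches
score = mota(5, 3, 2, 100)  # → 0.9
```

Note that MOTA can be negative when the total error count exceeds the number of ground-truth objects, which is why it is usually reported alongside MOTP.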

Argoverse: 3D Tracking and Forecasting With Rich Maps

Argoverse includes sensor data collected by a fleet of autonomous vehicles in Pittsburgh and Miami, along with 3D tracking annotations, 300k extracted interesting vehicle trajectories, and rich semantic maps whose geometric and semantic metadata are not currently available in any other public dataset.