VehicleNet: Learning Robust Visual Representation for Vehicle Re-Identification

@article{VehicleNet,
  title={VehicleNet: Learning Robust Visual Representation for Vehicle Re-Identification},
  author={Zhedong Zheng and Tao Ruan and Yunchao Wei and Yi Yang and Tao Mei},
  journal={IEEE Transactions on Multimedia}
}
One fundamental challenge of vehicle re-identification (re-id) is to learn robust and discriminative visual representation, given the significant intra-class vehicle variations across different camera views. As the existing vehicle datasets are limited in terms of training images and viewpoints, we propose to build a unique large-scale vehicle dataset (called VehicleNet) by harnessing four public vehicle datasets, and design a simple yet effective two-stage progressive approach to learning more… 

Cross-Domain Evaluation for Vehicle Re-Identification

This work conducted cross-domain evaluation experiments and ablation studies on two large benchmark datasets, VehicleID and VeRi-776, demonstrating that the approach outperforms the state of the art by 3.05% and 3.13% mAP, respectively.
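Since mean average precision (mAP) is the metric quoted throughout these comparisons, a minimal sketch of how it is computed from a ranked gallery may be useful; the function names below are illustrative and not taken from any of the cited papers:

```python
def average_precision(ranked_matches):
    """AP for one query: ranked_matches is a list of 0/1 flags ordered by
    descending similarity to the query, where 1 marks a correct gallery match."""
    hits, precisions = 0, []
    for rank, is_match in enumerate(ranked_matches, start=1):
        if is_match:
            hits += 1
            precisions.append(hits / rank)  # precision at this recall point
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(all_ranked_matches):
    """mAP: the mean of per-query average precisions."""
    aps = [average_precision(m) for m in all_ranked_matches]
    return sum(aps) / len(aps)
```

For example, a query whose correct matches sit at ranks 1 and 3 gets AP = (1/1 + 2/3)/2 = 5/6; averaging such values over all queries gives the mAP figures the papers report.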

Model Latent Views With Multi-Center Metric Learning for Vehicle Re-Identification

A multi-center metric learning framework for multi-view vehicle Re-ID is proposed that models latent views directly from vehicle visual appearance without any extra labels beyond ID, and experiments show its superiority over a range of existing state-of-the-art methods.

Vehicle Re-Identification based on Ensembling Deep Learning Features including a Synthetic Training Dataset, Orientation and Background Features, and Camera Verification.

Vehicle re-identification aims to find a specific vehicle among vehicle crops captured by multiple cameras placed at different intersections; background and orientation similarity matrices are added to the system to reduce bias towards these characteristics.

Discriminative-Region Attention and Orthogonal-View Generation Model for Vehicle Re-Identification

A Discriminative-Region Attention and Orthogonal-View Generation (DRA-OVG) model is proposed, which requires only identity (ID) labels to address the multiple challenges of vehicle Re-ID and achieves remarkable improvements over state-of-the-art vehicle re-identification methods on the VehicleID and VeRi-776 datasets.

Seeing Crucial Parts: Vehicle Model Verification via a Discriminative Representation Model

This article introduces a simple yet powerful deep model, the enforced intra-class alignment network (EIA-Net), which can learn a more discriminative image representation by localizing key vehicle parts and jointly incorporating two distance metrics: vehicle-level embedding and vehicle-part-sensitive embedding.

Robust Vehicle Re-identification via Rigid Structure Prior

This paper focuses on developing a robust part-aware, structure-based vehicle re-id system against massive appearance changes due to pose and illumination variations, applying strong convolutional neural networks to extract visual representations from the detected vehicle images.

Self-Supervised Visual Attention Learning for Vehicle Re-Identification

Self-supervised learning is used to regularize visual attention learning, demonstrating state-of-the-art (SOTA) performance together with the capability of capturing informative vehicle parts with no corresponding manual labels.

A Strong Baseline for Vehicle Re-Identification

This paper analyzes the main factors hindering vehicle Re-ID performance and presents solutions, specifically targeting Track 2 of the 5th AI City Challenge, including reducing the domain gap between real and synthetic data and adaptive loss-weight adjustment.

Vehicle Re-Identification with Spatio-Temporal Model Leveraging by Pose View Embedding

A two-branch framework for vehicle Re-ID is designed, consisting of a Keypoint-based Pose Embedding Visual (KPEV) model and a Keypoint-based Pose-Guided Spatio-Temporal (KPGST) model; both are integrated into the framework, and the results of KPEV and KPGST are fused based on a Bayesian network.
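Fusing a visual-appearance score with a spatio-temporal score, as the abstract above describes, is often approximated with a naive-Bayes combination. The sketch below assumes conditional independence of the two cues and a flat prior; it is a common simplification, not the exact Bayesian network used in the paper:

```python
def fuse_scores(p_visual, p_st, prior=0.5):
    """Naive-Bayes fusion of a visual-similarity probability and a
    spatio-temporal probability under a conditional-independence
    assumption. Returns the posterior probability that the query and
    the candidate crop share the same vehicle ID."""
    # Unnormalized likelihoods of "same ID" vs. "different ID"
    same = p_visual * p_st * prior
    diff = (1 - p_visual) * (1 - p_st) * (1 - prior)
    return same / (same + diff)
```

With both cues agreeing strongly (e.g. 0.9 visual, 0.8 spatio-temporal) the fused posterior rises well above either input, while conflicting cues pull it back toward the prior.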

Viewpoint robust knowledge distillation for accelerating vehicle re-identification

Experiments show that the proposed VRKD method outperforms many state-of-the-art vehicle re-identification approaches in both accuracy and speed.

A Deep Learning-Based Approach to Progressive Vehicle Re-identification for Urban Surveillance

This paper proposes a novel deep learning-based approach to PROgressive Vehicle re-ID, called "PROVID", which treats vehicle re-ID as two progressive search processes: coarse-to-fine search in the feature space, and near-to-distant search in the real-world surveillance environment.

Cross-View GAN Based Vehicle Generation for Re-identification

This work proposes a new deep architecture, called Cross-View Generative Adversarial Network (XVGAN), to learn features of vehicle images captured by cameras with disjoint views, taking the features as conditional variables to effectively infer cross-view images.

Part-Regularized Near-Duplicate Vehicle Re-Identification

This paper proposes a simple but efficient part-regularized discriminative feature-preserving method that enhances the ability to perceive subtle discrepancies in vehicle re-identification, and develops a novel framework to integrate part constraints with the global Re-ID modules by introducing a detection branch.

Deep Relative Distance Learning: Tell the Difference between Similar Vehicles

A Deep Relative Distance Learning (DRDL) method is proposed which exploits a two-branch deep convolutional network to project raw vehicle images into a Euclidean space, where distance can be directly used to measure the similarity of any two vehicles.
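Once images are projected into such a Euclidean embedding space, retrieval reduces to sorting the gallery by distance to the query. A minimal sketch (the function name and the NumPy-based setup are illustrative, not from the paper):

```python
import numpy as np

def rank_gallery(query_emb, gallery_embs):
    """Rank gallery entries by Euclidean distance to the query embedding;
    a smaller distance means a more similar vehicle.

    query_emb:    shape (d,) embedding of the query image
    gallery_embs: shape (n, d) embeddings of the gallery images
    Returns gallery indices sorted from most to least similar."""
    dists = np.linalg.norm(gallery_embs - query_emb, axis=1)
    return np.argsort(dists)
```

For a query embedding at the origin and gallery embeddings [[3,4], [1,0], [0,2]], the distances are [5, 1, 2], so the ranking is [1, 2, 0]; the hit rate of such rankings is exactly what CMC and mAP then summarize.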

PAMTRI: Pose-Aware Multi-Task Learning for Vehicle Re-Identification Using Highly Randomized Synthetic Data

A Pose-Aware Multi-Task Re-Identification (PAMTRI) framework is proposed that overcomes viewpoint dependency by explicitly reasoning about vehicle pose and shape via keypoints, heatmaps and segments from pose estimation, achieving significant improvement over the state of the art on two mainstream vehicle ReID benchmarks.

Improving triplet-wise training of convolutional neural network for vehicle re-identification

The experimental results demonstrate the effectiveness of the proposed methods, which outperform the state of the art on two vehicle re-id datasets derived from real-world urban surveillance videos.

Group-Sensitive Triplet Embedding for Vehicle Reidentification

A deep metric learning method, group-sensitive triplet embedding (GS-TRE), is proposed to recognize and retrieve vehicles, in which intra-class variance is elegantly modeled by incorporating an intermediate representation, the "group", between samples and each individual vehicle in triplet network learning.
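The triplet objective underlying GS-TRE and the triplet-wise training above pulls same-ID embeddings together and pushes different-ID embeddings apart by at least a margin. A minimal sketch of the standard triplet loss on precomputed embeddings (the group-sensitive extension and any network details are omitted):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet loss on embedding vectors: penalize triplets where
    the anchor-positive distance is not at least `margin` smaller than the
    anchor-negative distance. All inputs are 1-D NumPy embeddings."""
    d_ap = np.linalg.norm(anchor - positive)  # same-ID distance
    d_an = np.linalg.norm(anchor - negative)  # different-ID distance
    return max(d_ap - d_an + margin, 0.0)     # hinge: zero once the margin holds
```

When the negative is already farther away than the positive by more than the margin, the loss is zero and the triplet contributes no gradient, which is why hard-triplet mining (and, in GS-TRE, the group structure) matters in practice.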

Learning Deep Neural Networks for Vehicle Re-ID with Visual-spatio-Temporal Path Proposals

A two-stage framework that incorporates complex spatio-temporal information to effectively regularize re-identification results is proposed; extensive experiments and analysis show the effectiveness of the proposed method and its individual components.

A Dual-Path Model With Adaptive Attention for Vehicle Re-Identification

A novel dual-path model with adaptive attention for vehicle re-identification (AAVER) is proposed, in which the global appearance path captures macroscopic vehicle features while the orientation-conditioned part appearance path learns to capture localized discriminative features by focusing attention on the most informative keypoints.

Multi-View Vehicle Re-Identification using Temporal Attention Model and Metadata Re-ranking

This paper proposes a viewpoint-aware temporal attention model for vehicle ReID, utilizing deep learning features extracted from consecutive frames and taking vehicle orientation and metadata attributes into consideration; it achieves an mAP of 79.17%, ranking second in the competition.