Corpus ID: 211677394

Cross-Spectrum Dual-Subspace Pairing for RGB-infrared Cross-Modality Person Re-Identification

@article{Fan2020CrossSpectrumDP,
  title={Cross-Spectrum Dual-Subspace Pairing for RGB-infrared Cross-Modality Person Re-Identification},
  author={Xing Fan and Hao Luo and Chi Zhang and Wei Jiang},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.00213}
}
Due to its potential wide applications in video surveillance and other computer vision tasks such as tracking, person re-identification (ReID) has become popular and been widely investigated. However, conventional person re-identification can only handle RGB color images and fails in dark conditions. Thus RGB-infrared ReID (also known as Infrared-Visible ReID or Visible-Thermal ReID) has been proposed. Apart from the appearance discrepancy in traditional ReID caused by illumination, pose variations… 
4 Citations
Multi-complement feature network for infrared-visible cross-modality person re-identification
TLDR
This work proposes an end-to-end model, multi-complement feature network (MFN), to complement common features with single-modality features and achieves state-of-the-art performance on IV-ReID and RegDB datasets.
Learning Compact and Representative Features for Cross-Modality Person Re-Identification
TLDR
This paper proposes a new loss function called the Enumerate Angular Triplet (EAT) loss to narrow the gap between features of different modalities before feature embedding, and, motivated by knowledge distillation, presents a new Cross-Modality Knowledge Distillation (CMKD) loss.
Parameter Sharing Exploration and Hetero-Center Triplet Loss for Visible-Thermal Person Re-Identification
TLDR
How many parameters a two-stream network should share is explored, which is still not well investigated in the existing literature, and a hetero-center triplet loss is proposed to relax the strict constraint of the traditional triplet loss by replacing the anchor-to-sample comparisons with comparisons between the anchor center and all the other centers.
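The center-to-center comparison described above can be sketched as follows; the 2-D features, Euclidean distance, and margin value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def centers(features, labels):
    """Average the feature vectors of each identity into one center."""
    return {pid: np.mean([f for f, l in zip(features, labels) if l == pid], axis=0)
            for pid in set(labels)}

def hetero_center_triplet_loss(feat_v, lab_v, feat_t, lab_t, margin=0.3):
    """Compare identity centers across modalities instead of raw samples:
    pull the visible and thermal centers of the same identity together,
    push the closest wrong-identity center at least `margin` further away."""
    cv, ct = centers(feat_v, lab_v), centers(feat_t, lab_t)
    loss = 0.0
    for pid, c_anchor in cv.items():
        d_pos = np.linalg.norm(c_anchor - ct[pid])
        d_neg = min(np.linalg.norm(c_anchor - ct[q]) for q in ct if q != pid)
        loss += max(0.0, margin + d_pos - d_neg)
    return loss / len(cv)
```

Because each identity contributes a single center per modality, the number of comparisons no longer grows with batch size, which is the relaxation the summary refers to.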
SFANet: A Spectrum-aware Feature Augmentation Network for Visible-Infrared Person Re-Identification
TLDR
A novel spectrum-aware feature augmentation network named SFANet is formulated to employ grayscale-spectrum images in place of RGB images for feature learning, which can apparently reduce the modality discrepancy and detect inner structure relations across the different modalities, making it robust to color variations.

References

Showing 1–10 of 57 references
RGB-Infrared Cross-Modality Person Re-identification
TLDR
The experiments show that RGB-IR cross-modality matching is very challenging but still feasible using the proposed model with deep zero-padding, giving the best performance.
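The deep zero-padding idea above can be illustrated with a minimal input-construction sketch, assuming single-channel grayscale images from each modality; the function name and channel assignment are illustrative, not the paper's code.

```python
import numpy as np

def zero_pad_input(gray_img, modality):
    """Place a one-channel image into a two-channel tensor, zeroing the
    channel reserved for the other modality, so a single shared network
    can learn modality-specific filters on a uniform input format."""
    h, w = gray_img.shape
    padded = np.zeros((2, h, w), dtype=gray_img.dtype)
    channel = 0 if modality == "rgb" else 1  # channel 1 reserved for infrared
    padded[channel] = gray_img
    return padded
```

The zeroed channel tells the network which modality produced each sample, letting it specialize some filters per modality while sharing the rest.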
Hierarchical Discriminative Learning for Visible Thermal Person Re-Identification
TLDR
An improved two-stream CNN network is presented to learn the multimodality sharable feature representations and identity loss and contrastive loss are integrated to enhance the discriminability and modality-invariance with partially shared layer parameters.
Visible Thermal Person Re-Identification via Dual-Constrained Top-Ranking
TLDR
A dual-path network with a novel bi-directional dual-constrained top-ranking loss to learn discriminative feature representations and identity loss is further incorporated to model the identity-specific information to handle large intra-class variations.
Learning to Reduce Dual-Level Discrepancy for Infrared-Visible Person Re-Identification
TLDR
A novel Dual-level Discrepancy Reduction Learning (D$^2$RL) scheme which handles the two discrepancies separately in Infrared-Visible person re-identification and outperforms the state-of-the-art methods.
Cross-Modality Person Re-Identification with Generative Adversarial Training
TLDR
This paper proposes a novel cross-modality generative adversarial network (termed cmGAN) that integrates both an identification loss and a cross-modality triplet loss, minimizing inter-class ambiguity while maximizing cross-modality similarity among instances.
HSME: Hypersphere Manifold Embedding for Visible Thermal Person Re-Identification
TLDR
This paper proposes an end-to-end dual-stream hypersphere manifold embedding network (HSMEnet) with both classification and identification constraints and designs a two-stage training scheme to acquire decorrelated features.
Zero-Shot Person Re-identification via Cross-View Consistency
TLDR
This paper proposes a data-driven distance metric (DDDM) method, re-exploiting the training data to adjust the metric for each query-gallery pair, with a significant improvement over three baseline metric learning methods.
Person Re-identification by Multi-Channel Parts-Based CNN with Improved Triplet Loss Function
TLDR
A novel multi-channel parts-based convolutional neural network model under the triplet framework for person re-identification that significantly outperforms many state-of-the-art approaches, including both traditional and deep network-based ones, on the challenging i-LIDS, VIPeR, PRID2011 and CUHK01 datasets.
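The improved triplet loss mentioned above adds an intra-class distance constraint on top of the usual ranking term; the single-triplet sketch below is an assumption about its general form (margin values and the exact intra-class term are illustrative), not the paper's implementation.

```python
import numpy as np

def improved_triplet_loss(anchor, positive, negative, margin=1.0, intra_margin=0.5):
    """Standard triplet term (push the negative beyond the positive by
    `margin`) plus an extra term that also caps the absolute
    anchor-positive distance."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    inter = max(0.0, margin + d_ap - d_an)   # relative ranking constraint
    intra = max(0.0, d_ap - intra_margin)    # absolute intra-class constraint
    return inter + intra
```

The second term is what distinguishes this from a plain triplet loss: same-identity features must not only rank ahead of negatives but also stay within an absolute distance of each other.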
Spindle Net: Person Re-identification with Human Body Region Guided Feature Decomposition and Fusion
TLDR
This study proposes a novel convolutional neural network, called Spindle Net, based on human body region guided multi-stage feature decomposition and tree-structured competitive feature fusion; this is the first time human body structure information has been considered in a CNN framework to facilitate feature learning.
Mask-Guided Contrastive Attention Model for Person Re-identification
TLDR
This paper introduces the binary segmentation masks to construct synthetic RGB-Mask pairs as inputs, then designs a mask-guided contrastive attention model (MGCAM) to learn features separately from the body and background regions, and proposes a novel region-level triplet loss to restrain the features learnt from different regions.