A Similarity Inference Metric for RGB-Infrared Cross-Modality Person Re-identification

@inproceedings{Jia2020ASI,
  title={A Similarity Inference Metric for RGB-Infrared Cross-Modality Person Re-identification},
  author={Mengxi Jia and Yunpeng Zhai and Shijian Lu and Siwei Ma and Jiayuan Zhang},
  booktitle={IJCAI},
  year={2020}
}
RGB-Infrared (IR) cross-modality person re-identification (re-ID), which aims to search for an IR image in an RGB gallery or vice versa, is a challenging task due to the large discrepancy between the IR and RGB modalities. Existing methods typically address this challenge by aligning feature distributions or image styles across modalities, whereas the very useful similarities among gallery samples of the same modality (i.e., intra-modality sample similarities) are largely neglected. This paper presents a…
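A minimal, re-ranking-style sketch of the idea hinted at above (exploiting intra-modality gallery-gallery similarities to refine cross-modality query-gallery scores) is given below. This is not the paper's actual Similarity Inference Metric; the function name, the propagation rule, and the parameters k and alpha are illustrative assumptions.

# Sketch only: refine cross-modality query-gallery similarities by propagating
# them over an intra-modality gallery-gallery k-nearest-neighbour affinity graph.
import numpy as np

def refine_cross_modality_sim(cross_sim, gallery_sim, k=10, alpha=0.7):
    """cross_sim: (num_query, num_gallery) cross-modality similarities.
    gallery_sim: (num_gallery, num_gallery) intra-modality similarities."""
    n = gallery_sim.shape[0]
    # Keep only each gallery sample's k strongest intra-modality neighbours.
    affinity = np.zeros_like(gallery_sim)
    topk = np.argsort(-gallery_sim, axis=1)[:, :k]
    rows = np.arange(n)[:, None]
    affinity[rows, topk] = gallery_sim[rows, topk]
    # Row-normalise so that propagation is a weighted average over neighbours.
    affinity /= affinity.sum(axis=1, keepdims=True) + 1e-12
    # Blend the direct cross-modality score with the neighbour-propagated score.
    return alpha * cross_sim + (1.0 - alpha) * cross_sim @ affinity.T
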
Discover Cross-Modality Nuances for Visible-Infrared Person Re-Identification
Visible-infrared person re-identification (Re-ID) aims to match the pedestrian images of the same identity from different modalities. Existing works mainly focus on alleviating the modality…
Logit and Feature Dual-level Alignment for Visible-Infrared Person Re-Identification
Person re-identification (Re-ID) is an important task in video surveillance which automatically searches and identifies people across different cameras. Despite the extensive Re-ID progress in…
Syncretic Modality Collaborative Learning for Visible Infrared Person Re-Identification
Visible infrared person re-identification (VI-REID) aims to match pedestrian images between the daytime visible and nighttime infrared camera views. The large cross-modality discrepancies have become…
MSO: Multi-Feature Space Joint Optimization Network for RGB-Infrared Person Re-Identification
TLDR
This work proposes an edge-feature enhancement module to enhance the modality-sharable features in each single-modality space and designs a perceptual edge features (PEF) loss based on an analysis of edge fusion strategies, which markedly improves the network's performance.
Cross-Modality Person Re-Identification via Modality Confusion and Center Aggregation
Cross-modality person re-identification is a challenging task due to the large cross-modality discrepancy and intra-modality variations. Currently, most existing methods focus on learning…
Modality-aware Style Adaptation for RGB-Infrared Person Re-Identification
TLDR
A highly compact modality-aware style adaptation (MSA) framework is proposed, which aims to explore more potential relations between the RGB and IR modalities by introducing new related modalities, and two image-level losses are designed based on the quantified results to guide the style adaptation during an end-to-end four-modality collaborative learning process.
A Multi-Constraint Similarity Learning with Adaptive Weighting for Visible-Thermal Person Re-Identification
TLDR
A Multi-Constraint (MC) similarity learning method is proposed that jointly considers the cross-modality relationships from three different aspects, i.e., Instance-to-Instance (I2I), Center-to-Instance (C2I), and Center-to-Center (C2C).
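As a rough illustration of the three relation types named above (I2I, C2I, C2C), the sketch below computes them from a batch of embeddings. The function name, tensor shapes, the use of Euclidean distance, and the assumption that every identity appears in both modalities are mine, not the paper's exact formulation.

import torch

def multi_constraint_distances(feat_rgb, feat_ir, labels_rgb, labels_ir):
    """feat_*: (N, d) embeddings; labels_*: (N,) identity labels.
    Assumes each identity in the batch appears in both modalities."""
    ids = torch.unique(torch.cat([labels_rgb, labels_ir]))
    centers_rgb = torch.stack([feat_rgb[labels_rgb == i].mean(0) for i in ids])
    centers_ir = torch.stack([feat_ir[labels_ir == i].mean(0) for i in ids])
    d_i2i = torch.cdist(feat_rgb, feat_ir)        # Instance-to-Instance
    d_c2i = torch.cdist(centers_rgb, feat_ir)     # Center-to-Instance
    d_c2c = torch.cdist(centers_rgb, centers_ir)  # Center-to-Center
    return d_i2i, d_c2i, d_c2c
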
Deep High-Resolution Representation Learning for Cross-Resolution Person Re-Identification
TLDR
A Deep High-Resolution Pseudo-Siamese Framework (PS-HRNet) is proposed to solve the problem of matching person images with the same identity from different cameras, and a pseudo-Siamese framework is developed to reduce the difference in feature distributions between low-resolution and high-resolution images.
Matching on Sets: Conquer Occluded Person Re-identification Without Alignment
TLDR
MoS encodes a person image as a pattern set, represented by a ‘global vector’ whose elements each capture one specific visual pattern, and introduces the Jaccard distance as a metric to compute the distance between pattern sets and measure image similarity.
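The Jaccard-distance idea mentioned above can be sketched for two soft "pattern set" vectors as below; the soft min/max formulation and the function name are assumptions rather than the paper's exact definition.

import numpy as np

def soft_jaccard_distance(p, q, eps=1e-12):
    """p, q: 1-D arrays of pattern activations in [0, 1]."""
    intersection = np.minimum(p, q).sum()
    union = np.maximum(p, q).sum()
    return 1.0 - intersection / (union + eps)
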

References

Showing 1-10 of 37 references
Cross-Modality Paired-Images Generation for RGB-Infrared Person Re-Identification
TLDR
This paper proposes to generate cross-modality paired images and perform both global set-level and fine-grained instance-level alignments for RGB-IR re-ID, and demonstrates that the proposed model performs favourably against state-of-the-art methods.
RGB-Infrared Cross-Modality Person Re-Identification via Joint Pixel and Feature Alignment
TLDR
A novel, end-to-end Alignment Generative Adversarial Network (AlignGAN) is proposed for the RGB-IR re-ID task; it consists of a pixel generator, a feature generator and a joint discriminator, and is able not only to alleviate the cross-modality and intra-modality variations but also to learn identity-consistent features.
RGB-Infrared Cross-Modality Person Re-identification
TLDR
The experiments show that RGB-IR cross-modality matching is very challenging but still feasible with the proposed deep zero-padding model, which gives the best performance.
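The zero-padding input construction mentioned above can be sketched as follows: a single-channel image is placed into one of two input channels according to its modality, with the other channel left at zero, so that a single one-stream network can learn modality-specific filters. The preprocessing details (e.g. how an RGB image is reduced to one channel) are assumptions here.

import numpy as np

def zero_padded_input(gray_image, modality):
    """gray_image: (H, W) single-channel image; modality: 'rgb' or 'ir'."""
    h, w = gray_image.shape
    padded = np.zeros((2, h, w), dtype=gray_image.dtype)
    channel = 0 if modality == "rgb" else 1  # each modality gets its own channel
    padded[channel] = gray_image
    return padded
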
Visible Thermal Person Re-Identification via Dual-Constrained Top-Ranking
TLDR
A dual-path network with a novel bi-directional dual-constrained top-ranking loss is proposed to learn discriminative feature representations, and identity loss is further incorporated to model identity-specific information and handle large intra-class variations.
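A hedged sketch of a bi-directional cross-modality ranking loss in the spirit of the description above is given below; the hard-mining scheme, the margin value, and the function name are illustrative assumptions rather than the paper's exact dual-constrained formulation.

import torch
import torch.nn.functional as F

def bidirectional_ranking_loss(feat_rgb, feat_ir, labels_rgb, labels_ir, margin=0.3):
    """Assumes each identity in the batch appears in both modalities."""
    def one_direction(anchor, gallery, la, lg):
        dist = torch.cdist(anchor, gallery)            # (N_anchor, N_gallery)
        pos_mask = la[:, None] == lg[None, :]
        # Hardest positive: farthest same-identity sample in the other modality.
        d_pos = (dist * pos_mask).max(dim=1).values
        # Hardest negative: closest different-identity sample in the other modality.
        d_neg = (dist + pos_mask * 1e6).min(dim=1).values
        return F.relu(d_pos - d_neg + margin).mean()
    return (one_direction(feat_rgb, feat_ir, labels_rgb, labels_ir)
            + one_direction(feat_ir, feat_rgb, labels_ir, labels_rgb))
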
Hierarchical Discriminative Learning for Visible Thermal Person Re-Identification
TLDR
An improved two-stream CNN network is presented to learn multi-modality sharable feature representations, and identity loss and contrastive loss are integrated to enhance discriminability and modality invariance with partially shared layer parameters.
Cross-Modality Person Re-Identification with Generative Adversarial Training
TLDR
This paper proposes a novel cross-modality generative adversarial network (termed cmGAN) that integrates both identification loss and cross-modality triplet loss, which minimize inter-class ambiguity while maximizing cross-modality similarity among instances.
Learning to Reduce Dual-Level Discrepancy for Infrared-Visible Person Re-Identification
TLDR
A novel Dual-level Discrepancy Reduction Learning (D²RL) scheme is proposed that handles the two discrepancies separately in infrared-visible person re-identification and outperforms the state-of-the-art methods.
Cross-Domain Visual Matching via Generalized Similarity Measure and Feature Learning
TLDR
A novel pairwise similarity measure is presented that advances existing models by (i) expanding traditional linear projections into affine transformations and (ii) fusing affine Mahalanobis distance and cosine similarity through a data-driven combination.
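The fused similarity described above (an affine, Mahalanobis-style distance term combined with a cosine-style bilinear term under learnable combination weights) can be sketched as follows; the specific parameterisation and class name are assumptions, not the paper's exact generalized similarity model.

import torch
import torch.nn as nn

class GeneralizedSimilaritySketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.A = nn.Linear(dim, dim, bias=True)   # affine map for the distance term
        self.C = nn.Linear(dim, dim, bias=False)  # bilinear map for the similarity term
        self.w = nn.Parameter(torch.ones(2))      # data-driven combination weights

    def forward(self, x, y):
        # Affine Mahalanobis-style squared distance between projected samples.
        d = ((self.A(x) - self.A(y)) ** 2).sum(dim=-1)
        # Cosine-style bilinear similarity term.
        s = (self.C(x) * y).sum(dim=-1)
        return -self.w[0] * d + self.w[1] * s  # larger output = more similar
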
Harmonious Attention Network for Person Re-identification
  • Wei Li, Xiatian Zhu, S. Gong
  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018
TLDR
A novel Harmonious Attention CNN (HA-CNN) model is formulated for joint learning of soft pixel attention and hard regional attention along with simultaneous optimisation of feature representations, dedicated to optimising person re-id in uncontrolled (misaligned) images.
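As a loose illustration of the soft pixel-attention component mentioned above, the sketch below applies a learned per-location mask to a feature map; HA-CNN's actual design (including its hard regional attention branch) is considerably more involved, so treat this purely as an illustration.

import torch
import torch.nn as nn

class SoftSpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        # Per-location attention weights in (0, 1), broadcast over channels.
        mask = torch.sigmoid(self.conv(x))  # (B, 1, H, W)
        return x * mask
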
HSME: Hypersphere Manifold Embedding for Visible Thermal Person Re-Identification
TLDR
This paper proposes an end-to-end dual-stream hypersphere manifold embedding network (HSMEnet) with both classification and identification constraints, and designs a two-stage training scheme to acquire decorrelated features.
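The hypersphere embedding idea named above can be sketched as L2-normalised features fed to a weight-normalised (cosine) classifier, so that identity decisions depend only on angles; the class name and the scale factor are assumptions on my part.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SphereClassifierSketch(nn.Module):
    def __init__(self, dim, num_ids, scale=14.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_ids, dim))
        self.scale = scale

    def forward(self, feat):
        # Cosine similarity between normalised features and normalised class weights;
        # the scaled logits are then fed to cross-entropy for the classification constraint.
        logits = F.linear(F.normalize(feat, dim=1), F.normalize(self.weight, dim=1))
        return self.scale * logits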