Corpus ID: 25262907

Learning a Repression Network for Precise Vehicle Search

@article{Xu2017LearningAR,
  title={Learning a Repression Network for Precise Vehicle Search},
  author={Qiantong Xu and Ke Yan and Yonghong Tian},
  journal={ArXiv},
  year={2017},
  volume={abs/1708.02386}
}
The explosive growth in the use of surveillance cameras for public security highlights the importance of vehicle search in large-scale image databases. Precise vehicle search, which aims to find all instances of a given query vehicle image, is a challenging task because different vehicles can look very similar to one another when they share the same visual attributes. To address this problem, we propose the Repression Network (RepNet), a novel multi-task learning framework, to learn discriminative…
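
The abstract above is truncated, but the multi-task idea it describes can be illustrated with a minimal sketch: a shared backbone feeding a coarse attribute classifier and a fine-grained embedding branch trained jointly. All layer sizes, loss weights, and the stand-in backbone below are illustrative assumptions, not RepNet's actual architecture.

```python
# Minimal sketch of a multi-task head for vehicle search, assuming a shared CNN
# backbone feeding (1) a coarse attribute classifier and (2) a fine-grained
# embedding branch. Sizes and losses are illustrative, not RepNet's.
import torch
import torch.nn as nn

class MultiTaskVehicleNet(nn.Module):
    def __init__(self, backbone, feat_dim=512, num_models=250, embed_dim=128):
        super().__init__()
        self.backbone = backbone                             # any CNN mapping images -> feat_dim
        self.attr_head = nn.Linear(feat_dim, num_models)     # coarse attributes (e.g., vehicle model)
        self.embed_head = nn.Linear(feat_dim, embed_dim)     # fine-grained embedding for retrieval

    def forward(self, x):
        f = self.backbone(x)
        attr_logits = self.attr_head(f)
        embedding = nn.functional.normalize(self.embed_head(f), dim=1)
        return attr_logits, embedding

# Example joint objective: cross-entropy on attributes plus a triplet loss on embeddings.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512), nn.ReLU())  # stand-in backbone
net = MultiTaskVehicleNet(backbone)
images = torch.randn(8, 3, 64, 64)
model_labels = torch.randint(0, 250, (8,))
attr_logits, emb = net(images)
loss = nn.CrossEntropyLoss()(attr_logits, model_labels) \
       + nn.TripletMarginLoss(margin=0.3)(emb[0:2], emb[2:4], emb[4:6])
loss.backward()
```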

Citations

Two-Level Attention Network With Multi-Grain Ranking Loss for Vehicle Re-Identification

A novel Two-Level Attention network supervised by a Multi-Grain Ranking loss (TAMR) is proposed to learn an efficient feature embedding for the vehicle re-ID task, taking the multi-grain relationship between vehicles into consideration.

Selective deep ensemble for instance retrieval

A Selective Deep Ensemble (SDE) framework, inspired by the attention mechanism, is proposed to combine various models and features in a complementary way, and it is demonstrated that a large improvement can be achieved with only a slight increase in computation cost.

Stripe-based and attribute-aware network: a two-branch deep model for vehicle re-identification

A novel two-branch stripe-based and attribute-aware deep convolutional neural network (SAN) is proposed to learn an efficient feature embedding for the vehicle re-ID task.

Natural Language-Based Vehicle Retrieval with Explicit Cross-Modal Representation Learning

This paper proposes a contrastive cross-modal vehicle retrieval solution that exploits the complementarity between natural language and vision representations, achieving an MRR score of 33.20% and ranking 7th in Track 2 of the AI City Challenge 2022.
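
The contrastive cross-modal idea can be sketched with a symmetric InfoNCE-style loss over precomputed image and text embeddings; the embedding dimension and temperature below are illustrative assumptions, and this is not the specific AI City Challenge solution described above.

```python
# Minimal sketch of a contrastive cross-modal objective over precomputed
# image and text embeddings; matched pairs sit on the diagonal of the
# similarity matrix. Dimensions and temperature are illustrative.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=1)
    txt_emb = F.normalize(txt_emb, dim=1)
    logits = img_emb @ txt_emb.t() / temperature       # (B, B) image-text similarity matrix
    targets = torch.arange(img_emb.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = cross_modal_contrastive_loss(torch.randn(16, 256), torch.randn(16, 256))
```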

Group-Group Loss-Based Global-Regional Feature Learning for Vehicle Re-Identification

This work proposes a Group-Group Loss (GGL) to optimize the distances within and across vehicle image groups, accelerating global-regional feature (GRF) learning and promoting its discriminative power.

SCAN: Spatial and Channel Attention Network for Vehicle Re-Identification

A Spatial and Channel Attention Network (SCAN) based on DCNNs is proposed, which contains two branches, a spatial attention branch and a channel attention branch, embedded after convolutional layers to refine the feature maps so that more discriminative features can be extracted automatically.
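
A minimal sketch of the two-branch idea, assuming a simple squeeze-excitation-style channel attention and a single-convolution spatial attention; the exact SCAN layer configuration is not reproduced here.

```python
# Minimal sketch of channel- and spatial-attention branches refining a feature map.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # global average pool -> per-channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=7, padding=3)

    def forward(self, x):
        mask = torch.sigmoid(self.conv(x.mean(dim=1, keepdim=True)))  # (B, 1, H, W) spatial mask
        return x * mask

feat = torch.randn(4, 64, 16, 16)
refined = SpatialAttention()(ChannelAttention(64)(feat))
```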

Camera Identification Based on Domain Knowledge-Driven Deep Multi-Task Learning

A domain knowledge-driven method for camera identification is proposed that consists of a pre-processing module, a feature extractor, and a hierarchical multi-task learning procedure; the accuracy of cell-phone device identification is found to reach 84.3%, much higher than that of camera identification.

Deep learning for fine-grained classification of jujube fruit in the natural environment

A deep convolutional neural network model is proposed for the fine-grained classification of jujube, which exploits a two-stream network to effectively learn discriminative features for each image at both the shape level and the fine-grained level simultaneously.

Extraction of information and facts from data mining of random sequences for undergraduate research

A community college REU project provides awareness of connectedness in linking previously published reports, critical thinking in result interpretation, and career development when moving on to a senior college REU program, which are the top three benefits of a college education according to a July 2016 Money Magazine "Value of College" survey.

Vehicle Reidentification via Multifeature Hypergraph Fusion

The proposed method uses hypergraph optimization to learn the similarity between the query image and images in the library; by exploiting pairwise and higher-order relationships between query objects and the image library, the similarity measurement is improved compared to direct matching.

References

SHOWING 1-10 OF 19 REFERENCES

Deep Relative Distance Learning: Tell the Difference between Similar Vehicles

A Deep Relative Distance Learning (DRDL) method is proposed which exploits a two-branch deep convolutional network to project raw vehicle images into a Euclidean space where distance can be directly used to measure the similarity of any two vehicles.
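
A minimal sketch of retrieval in such a space: rank gallery images by Euclidean distance to the query embedding. The random tensors below are stand-ins for the output of a trained two-branch network.

```python
# Minimal sketch of distance-based retrieval in a learned embedding space.
import torch

def rank_gallery(query_emb, gallery_embs):
    """Return gallery indices sorted by Euclidean distance to the query."""
    dists = torch.cdist(query_emb.unsqueeze(0), gallery_embs).squeeze(0)
    return torch.argsort(dists)

query = torch.randn(128)                  # stand-in for the query vehicle embedding
gallery = torch.randn(1000, 128)          # stand-in for gallery embeddings
ranking = rank_gallery(query, gallery)    # ranking[0] is the most similar vehicle
```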

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
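
A minimal sketch of a VGG-style block, stacking small 3x3 convolutions to build depth as described above; the channel counts and block arrangement below are illustrative, not the exact 16- or 19-layer configurations.

```python
# Minimal sketch of VGG-style blocks built from stacked 3x3 convolutions.
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))        # halve spatial resolution after each block
    return nn.Sequential(*layers)

features = nn.Sequential(vgg_block(3, 64, 2), vgg_block(64, 128, 2), vgg_block(128, 256, 3))
out = features(torch.randn(1, 3, 224, 224))   # -> (1, 256, 28, 28)
```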

Cross-Domain Image Retrieval with a Dual Attribute-Aware Ranking Network

This work proposes a Dual Attribute-aware Ranking Network (DARN) for retrieval feature learning, consisting of two sub-networks, one for each domain, whose retrieval feature representations are driven by semantic attribute learning.

Embedding Label Structures for Fine-Grained Feature Representation

The proposed multi-task learning framework significantly outperforms previous fine-grained feature representations for image retrieval at different levels of relevance; to model the multi-level relevance, label structures such as hierarchy or shared attributes are seamlessly embedded into the framework by generalizing the triplet loss.
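
One simple way to generalize the triplet loss to a label hierarchy is to let the margin grow with the level of label disagreement; the two-level scheme and margin values below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a triplet loss whose margin depends on how strongly the
# negative's labels disagree with the anchor's (e.g., same coarse class vs.
# completely different). Margins and levels are illustrative.
import torch
import torch.nn.functional as F

def hierarchical_triplet_loss(anchor, positive, negative, level, margins=(0.2, 0.5)):
    """level selects a margin per triplet: 0 = negative shares the coarse label, 1 = fully different."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    margin = torch.tensor([margins[l] for l in level])
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128)
loss = hierarchical_triplet_loss(a, p, n, level=[0, 1, 1, 0])
```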

Symbiotic Segmentation and Part Localization for Fine-Grained Categorization

A model of the base-level category is built that can be fitted to images, producing high-quality foreground segmentations and mid-level part localizations, and improving categorization accuracy over the state of the art.

FaceNet: A unified embedding for face recognition and clustering

A system is presented that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, achieving state-of-the-art face recognition performance using only 128 bytes per face.

Learning Face Representation from Scratch

A semi-automatic way to collect face images from the Internet is proposed, and a large-scale dataset containing about 10,000 subjects and 500,000 images, called CASIA-WebFace, is built, based on which an 11-layer CNN is used to learn a discriminative representation and obtain state-of-the-art accuracy on LFW and YTF.

Large Scale Online Learning of Image Similarity Through Ranking

OASIS is an online dual approach using the passive-aggressive family of learning algorithms with a large margin criterion and an efficient hinge loss cost, which suggests that query independent similarity could be accurately learned even for large scale data sets that could not be handled before.
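
A minimal sketch of an OASIS-style passive-aggressive update for a bilinear similarity S(x, y) = x^T W y under a triplet hinge loss; the aggressiveness cap C and the dimensions below are illustrative assumptions.

```python
# Minimal sketch of a passive-aggressive update for bilinear similarity learning.
import numpy as np

def oasis_step(W, x, x_pos, x_neg, C=0.1):
    """One online update: enforce S(x, x_pos) >= S(x, x_neg) + 1 with a hinge loss."""
    loss = max(0.0, 1.0 - x @ W @ x_pos + x @ W @ x_neg)
    if loss > 0.0:
        V = np.outer(x, x_pos - x_neg)                    # gradient of the hinge loss w.r.t. W
        tau = min(C, loss / (np.linalg.norm(V) ** 2))     # passive-aggressive step size
        W = W + tau * V
    return W

d = 64
W = np.eye(d)                                             # start from the identity similarity
x, x_pos, x_neg = (np.random.randn(d) for _ in range(3))
W = oasis_step(W, x, x_pos, x_neg)
```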

Learning a similarity metric discriminatively, with application to face verification

The idea is to learn a function that maps input patterns into a target space such that the L1 norm in the target space approximates the "semantic" distance in the input space.
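
A minimal sketch of a siamese-style contrastive loss that uses the L1 distance in the target space; the margin and embedding dimension are illustrative assumptions.

```python
# Minimal sketch of a contrastive loss over L1 distances between paired embeddings.
import torch
import torch.nn.functional as F

def contrastive_l1_loss(emb1, emb2, same, margin=1.0):
    """same = 1 for genuine pairs (pull together), 0 for impostor pairs (push apart)."""
    d = (emb1 - emb2).abs().sum(dim=1)                    # L1 distance in the target space
    return (same * d + (1 - same) * F.relu(margin - d)).mean()

e1, e2 = torch.randn(8, 64), torch.randn(8, 64)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_l1_loss(e1, e2, same)
```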

Deep semantic ranking based hashing for multi-label image retrieval

In this work, a deep convolutional neural network is incorporated into hash functions to jointly learn feature representations and mappings from them to hash codes, which avoids the limited semantic representation power of hand-crafted features.
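
A minimal sketch of the retrieval step common to deep-hashing approaches: binarize continuous network outputs into bit codes and rank by Hamming distance. The network producing the continuous codes is omitted, and the 48-bit code length is an illustrative assumption.

```python
# Minimal sketch of hash-code binarization and Hamming-distance ranking.
import torch

def to_hash_codes(continuous_codes):
    """Threshold continuous outputs (e.g., after tanh) into {0, 1} bit codes."""
    return (continuous_codes > 0).to(torch.uint8)

def hamming_rank(query_bits, gallery_bits):
    dists = (query_bits ^ gallery_bits).sum(dim=1)   # Hamming distance to each gallery item
    return torch.argsort(dists)

query = to_hash_codes(torch.randn(48))               # stand-in 48-bit query code
gallery = to_hash_codes(torch.randn(1000, 48))       # stand-in gallery codes
ranking = hamming_rank(query, gallery)
```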