• Corpus ID: 59336116

See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification

@article{Hu2019SeeBB,
  title={See Better Before Looking Closer: Weakly Supervised Data Augmentation Network for Fine-Grained Visual Classification},
  author={Tao Hu and Honggang Qi},
  journal={ArXiv},
  year={2019},
  volume={abs/1901.09891}
}
  • Tao Hu, H. Qi
  • Published 26 January 2019
  • Computer Science
  • ArXiv
Data augmentation is usually adopted to increase the amount of training data, prevent overfitting, and improve the performance of deep models. Key Method: Specifically, for each training image, we first generate attention maps to represent the object's discriminative parts by weakly supervised learning. Next, we augment the image guided by these attention maps, including attention cropping and attention dropping. The proposed WS-DAN improves the classification accuracy in two folds. In the first stage…
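The attention cropping and attention dropping operations described in the key method can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: it assumes the attention map has been upsampled to image resolution, and the threshold hyperparameters `theta_c` and `theta_d` are illustrative names, not necessarily the paper's exact values.

```python
import numpy as np

def attention_crop(image, attn, theta_c=0.5):
    """Attention cropping (sketch): find where the attention map exceeds a
    fraction of its maximum, then crop the image to that bounding box so
    the network can 'look closer' at the discriminative part."""
    mask = attn >= theta_c * attn.max()
    ys, xs = np.where(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1]

def attention_drop(image, attn, theta_d=0.5):
    """Attention dropping (sketch): zero out the highly attended region,
    forcing the network to discover other discriminative parts."""
    keep = (attn < theta_d * attn.max()).astype(image.dtype)
    # Broadcast the 2-D mask over channels if the image is H x W x C.
    return image * keep[..., None] if image.ndim == 3 else image * keep

# Toy example: an 8x8 image with attention concentrated in a 3x3 patch.
img = np.ones((8, 8))
attn = np.zeros((8, 8))
attn[2:5, 3:6] = 1.0
cropped = attention_crop(img, attn)   # 3x3 crop around the attended patch
dropped = attention_drop(img, attn)   # same image with the patch zeroed
```

In the paper's pipeline, both augmented images are fed back into training, which is how the attention maps guide the data augmentation.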

Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization

TLDR
Experimental studies show that transfer learning from large scale datasets can be utilized effectively with visual attention based data augmentation, which can obtain state-of-the-art results on several FGVC datasets.

Improving Fine-Grained Visual Recognition in Low Data Regimes via Self-Boosting Attention Mechanism

TLDR
The self-boosting attention mechanism is proposed, a novel method for regularizing the network to focus on the key regions shared across samples and classes; it significantly improves fine-grained visual recognition performance in low data regimes and can be incorporated into existing network architectures.

Attention-Guided CutMix Data Augmentation Network for Fine-Grained Bird Recognition

TLDR
The proposed Attention-Guided CutMix Data Augmentation Network (AGCN) effectively improves the classification performance of the network and AGCN achieves excellent performance on the challenging dataset, CUB Birds.

Semantic feature augmentation for fine-grained visual categorization with few-sample training

TLDR
This paper proposes two novel feature augmentation approaches, Semantic Gate Feature Augmentation (SGFA) and Semantic Boundary Feature Augmentation (SBFA), which reduce overfitting on small datasets by adding random noise to different regions of the image's feature maps through a gating mechanism.

Fine-Grained Visual Classification using Self Assessment Classifier

TLDR
A Self Assessment Classifier, which simultaneously leverages the representation of the image and the top-k predicted classes to reassess the classification results, achieves new state-of-the-art results on the CUB200-2011, Stanford Dogs, and FGVC Aircraft datasets.

Fine-grained image classification algorithm based on Attention Self-supervision

TLDR
A fine-grained classification algorithm based on Attention Self-supervision (ASS) is proposed, which uses a pre-trained ResNet-152 model to extract global image features and predicts attention maps with depthwise separable convolution (DS conv).

Contrastively-reinforced Attention Convolutional Neural Network for Fine-grained Image Recognition

TLDR
The Contrastively-reinforced Attention Convolutional Neural Network (CRA-CNN) reinforces the attention awareness of deep activations and is comparable with state-of-the-art studies despite its simplicity.

A Two-Stage Approach for Fine-Grained Visual Recognition via Confidence Ranking and Fusion

TLDR
This work proposes a two-stage approach that can fuse the original features and partial features by evaluating and ranking the information of partial images and achieves excellent performance on two benchmark datasets, which demonstrates its effectiveness.

Fine-grained classification based on multi-scale pyramid convolution networks

TLDR
A weakly supervised fine-grained classification network that replaces ordinary convolution kernels in a residual network with multi-scale pyramid convolution kernels, which expands the receptive field and exploits complementary information across scales.

S2SiamFC: Self-supervised Fully Convolutional Siamese Network for Visual Tracking

TLDR
This work proposes a novel self-supervised framework for visual tracking which can easily adapt the state-of-the-art supervised Siamese-based trackers into unsupervised ones by utilizing the fact that an image and any cropped region of it can form a natural pair for self-training.
...

References

SHOWING 1-10 OF 47 REFERENCES

The application of two-level attention models in deep convolutional neural network for fine-grained image classification

TLDR
This paper proposes to apply visual attention to fine-grained classification task using deep neural network and achieves the best accuracy under the weakest supervision condition, and is competitive against other methods that rely on additional annotations.

Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-Grained Image Recognition

TLDR
A novel recurrent attention convolutional neural network (RA-CNN) which recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutual reinforced way and achieves the best performance in three fine-grained tasks.

Fully Convolutional Attention Localization Networks: Efficient Attention Localization for Fine-Grained Recognition

TLDR
It is shown that zooming in on the selected attention regions significantly improves the performance of fine-grained recognition, and the proposed approach is noticeably more computationally efficient during both training and testing because of its fully-convolutional architecture.

Multi-Attention Multi-Class Constraint for Fine-grained Image Recognition

TLDR
A novel attention-based convolutional neural network (CNN) which regulates multiple object parts among different input images; it can be easily trained end-to-end and is highly efficient, requiring only one training stage.

Learning a Discriminative Filter Bank Within a CNN for Fine-Grained Recognition

TLDR
This work shows that mid-level representation learning can be enhanced within the CNN framework, by learning a bank of convolutional filters that capture class-specific discriminative patches without extra part or bounding box annotations.

Learning Deep Features for Discriminative Localization

In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability.
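The localization ability referred to here comes from the class activation map (CAM): because global average pooling is followed by a single linear layer, the classifier weights for a class can be projected back onto the final convolutional feature maps. A minimal numpy sketch (variable names are ours, not the paper's):

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Class activation map (sketch): weight the final conv feature maps
    by the linear classifier's weights for one class and sum over channels.

    features:   (C, H, W) conv activations before global average pooling
    fc_weights: (num_classes, C) weights of the linear classifier
    Returns a (H, W) map highlighting regions that drive the class score.
    """
    w = fc_weights[class_idx]                          # (C,)
    return np.tensordot(w, features, axes=([0], [0]))  # (H, W)

# Toy example: channel 0 activates at one spatial location only.
features = np.zeros((2, 4, 4))
features[0, 1, 1] = 1.0
fc_weights = np.array([[2.0, 0.0],   # class 0 relies on channel 0
                       [0.0, 3.0]])  # class 1 relies on channel 1
cam = class_activation_map(features, fc_weights, 0)
```

Upsampling `cam` to image resolution gives the localization heat map; WS-DAN's weakly supervised attention maps build on this idea.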

Neural Activation Constellations: Unsupervised Part Model Discovery with Convolutional Networks

TLDR
An approach is presented that is able to learn part models in a completely unsupervised manner, without part annotations and even without given bounding boxes during learning, to find constellations of neural activation patterns computed using convolutional neural networks.

Jointly Optimize Data Augmentation and Network Training: Adversarial Data Augmentation in Human Pose Estimation

TLDR
The key idea is to design an augmentation generator that competes against the target network by exploring its weaknesses, while the target network learns from the resulting hard augmentations to achieve better performance.

Pairwise Confusion for Fine-Grained Visual Classification

TLDR
This work addresses overfitting in end-to-end neural network training on FGVC tasks using a novel optimization procedure, called Pairwise Confusion (PC), which reduces overfitting by intentionally introducing confusion in the activations.
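The "intentional confusion" in Pairwise Confusion is, as described in the paper, a Euclidean-distance penalty between the predicted distributions of a pair of samples, added to the usual classification loss. A minimal numpy sketch; `lam` is an assumed weighting hyperparameter, not a value from the paper:

```python
import numpy as np

def pairwise_confusion(p1, p2, lam=1.0):
    """Euclidean confusion term (sketch): penalize the squared distance
    between the predicted class distributions of a sample pair, pulling
    predictions toward each other to regularize overconfident activations.
    `lam` weights the term against the cross-entropy loss (assumed name)."""
    return lam * np.sum((p1 - p2) ** 2)

# Toy example: two maximally different one-hot predictions.
p1 = np.array([1.0, 0.0])
p2 = np.array([0.0, 1.0])
loss = pairwise_confusion(p1, p2)
```

In training, this term would be summed with the cross-entropy loss over randomly paired samples in each mini-batch.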

Learning Multi-attention Convolutional Neural Network for Fine-Grained Image Recognition

TLDR
This paper proposes a novel part learning approach by a multi-attention convolutional neural network (MA-CNN), where part generation and feature learning can reinforce each other, and shows the best performances on three challenging published fine-grained datasets.