Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning

@inproceedings{Cui2018LargeSF,
  title={Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning},
  author={Yin Cui and Yang Song and Chen Sun and Andrew G. Howard and Serge J. Belongie},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={4109--4118}
}
Transferring the knowledge learned from large scale datasets (e.g., ImageNet) via fine-tuning offers an effective solution for domain-specific fine-grained visual categorization (FGVC) tasks (e.g., recognizing bird species or car make & model). In such scenarios, data annotation often calls for specialized domain knowledge and thus is difficult to scale. In this work, we first tackle a problem in large scale FGVC. Our method won first place in iNaturalist 2017 large scale species classification… 
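The fine-tuning recipe the abstract refers to can be sketched with a toy NumPy model (this is only an illustration, not the paper's method; the "backbone" here is a fixed random projection standing in for an ImageNet-pretrained CNN, and the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a fixed (frozen) random projection.
# A real pipeline would use an ImageNet-pretrained CNN as the feature extractor.
W_backbone = rng.normal(size=(64, 16)) / np.sqrt(64)

def features(x):
    # Frozen feature extractor: its weights are never updated below.
    return np.maximum(x @ W_backbone, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def xent(p, y):
    return -np.log(p[np.arange(len(y)), y]).mean()

# Synthetic stand-in for a small target-domain FGVC dataset: 3 classes.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 3, size=200)

F = features(X)              # extract frozen features once
W_head = np.zeros((16, 3))   # new classification head, trained from scratch

loss0 = xent(softmax(F @ W_head), y)   # = ln(3): head starts uninformative
for _ in range(200):                   # gradient descent on the head only
    p = softmax(F @ W_head)
    p[np.arange(len(y)), y] -= 1.0     # dL/dlogits for cross-entropy
    W_head -= 0.1 * F.T @ p / len(y)
loss1 = xent(softmax(F @ W_head), y)   # training loss decreases
```

In practice the backbone is usually unfrozen after the head converges and the whole network is trained at a small learning rate; the frozen-backbone stage above is just the cheapest variant of the idea.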

Effective Domain Knowledge Transfer with Soft Fine-tuning

TLDR
This paper first introduces the concept of general discrimination to describe the ability of a network to distinguish untrained patterns, and experimentally demonstrates that general discrimination can potentially enhance the total discrimination ability on the target domain.

Progressive Adversarial Networks for Fine-Grained Domain Adaptation

TLDR
Progressive Adversarial Networks (PAN) are presented to align fine-grained categories across domains with a curriculum-based adversarial learning framework, outperforming state-of-the-art domain adaptation methods.

Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization

TLDR
Experimental studies show that transfer learning from large-scale datasets can be utilized effectively with visual-attention-based data augmentation, obtaining state-of-the-art results on several FGVC datasets.

Bridging the Web Data and Fine-Grained Visual Recognition via Alleviating Label Noise and Domain Mismatch

TLDR
This work mainly focuses on two critical issues in web images, "label noise" and "domain mismatch", and proposes an end-to-end deep denoising network (DDN) model to jointly solve these problems in the process of web image selection.

Transferring Pretrained Networks to Small Data via Category Decorrelation

TLDR
A novel regularization approach, Category Decorrelation (CatDec), is proposed to minimize category correlation in the model, introducing a new inductive bias that strengthens model transfer.

Fine-Grained Image Analysis with Deep Learning: A Survey

TLDR
A systematic survey of recent advances in deep learning powered FGIA is presented, where it attempts to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas -- fine-grained image recognition and fine-grained image retrieval.

Web-Supervised Network for Fine-Grained Visual Classification

TLDR
This paper proposes a simple yet effective approach to deal with noisy images and hard examples during training for FGVC, and demonstrates that this approach is much superior to the state-of-the-art web-supervised methods.

Two-Stage Fine-Tuning: A Novel Strategy for Learning Class-Imbalanced Data

TLDR
A two-stage fine-tuning strategy is proposed: first, the final layer of the pretrained model is fine-tuned with a class-balanced reweighting loss, allowing the model to learn an initial representation of the specific task; then standard fine-tuning is performed.
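The two-stage recipe in the TLDR above can be sketched with a toy NumPy model (an illustration only, not the paper's implementation; a one-hidden-layer network on synthetic long-tailed data stands in for a pretrained deep network):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cls = 3
counts = np.array([150, 40, 10])            # long-tailed class distribution
X = rng.normal(size=(counts.sum(), 32))
y = np.repeat(np.arange(n_cls), counts)

# Toy "pretrained" model: a hidden layer (W1, the backbone) plus a head (W2).
W1 = rng.normal(size=(32, 16)) / np.sqrt(32)
W2 = np.zeros((16, n_cls))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def xent(p, y):
    return -np.log(p[np.arange(len(y)), y]).mean()

# Class-balanced weights: inverse class frequency, normalised to mean 1.
w_balanced = (len(y) / (n_cls * counts))[y]
w_balanced /= w_balanced.mean()
w_standard = np.ones(len(y))

def step(update_backbone, sample_w, lr=0.05):
    global W1, W2
    h = np.maximum(X @ W1, 0.0)
    p = softmax(h @ W2)
    d = p
    d[np.arange(len(y)), y] -= 1.0
    d *= sample_w[:, None] / len(y)         # (re)weighted cross-entropy grad
    if update_backbone:                     # stage 2 also updates the backbone
        dh = d @ W2.T
        dh[h <= 0.0] = 0.0                  # backprop through ReLU
        W1 = W1 - lr * X.T @ dh
    W2 = W2 - lr * h.T @ d

loss0 = xent(softmax(np.maximum(X @ W1, 0.0) @ W2), y)
for _ in range(150):   # stage 1: final layer only, class-balanced loss
    step(update_backbone=False, sample_w=w_balanced)
for _ in range(150):   # stage 2: standard (unweighted) fine-tuning of all layers
    step(update_backbone=True, sample_w=w_standard)
loss1 = xent(softmax(np.maximum(X @ W1, 0.0) @ W2), y)
```

The point of the two stages is that the balanced first stage keeps the new head from being dominated by head classes before the backbone is touched, while the unweighted second stage adapts the full representation to the actual data distribution.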

Improving Fine-Grained Visual Recognition in Low Data Regimes via Self-Boosting Attention Mechanism

TLDR
The self-boosting attention mechanism is proposed, a novel method that regularizes the network to focus on the key regions shared across samples and classes; it significantly improves fine-grained visual recognition performance in low-data regimes and can be incorporated into existing network architectures.

Understanding Cross-Domain Few-Shot Learning: An Experimental Study

TLDR
This paper empirically investigates scenarios under which it is advantageous to use each pre-training scheme, based on domain similarity and few-shot difficulty: the performance gain of self-supervised pre-training over supervised pre-training increases when domain similarity is smaller or few-shot difficulty is lower.
...

References

Showing 1-10 of 71 references

Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation

TLDR
A new semi-supervised method for learning via web data is proposed, with the unique design of exploiting strong supervision: in addition to standard image-level labels, the method also utilizes detailed annotations, including object bounding boxes and part landmarks.

Fine-Grained Categorization and Dataset Bootstrapping Using Deep Metric Learning with Humans in the Loop

TLDR
Experimental evaluations show significant performance gain using dataset bootstrapping and demonstrate state-of-the-art results achieved by the proposed deep metric learning methods.

Fine-Grained Recognition in the Wild: A Multi-task Domain Adaptation Approach

TLDR
This work studies fine-grained domain adaptation as a step towards overcoming the dataset shift between easily acquired annotated images and the real world, and uses an attribute-based multi-task adaptation loss to increase accuracy from a baseline of 4.2% to 19.1% in the semi-supervised adaptation case.

The application of two-level attention models in deep convolutional neural network for fine-grained image classification

TLDR
This paper proposes to apply visual attention to the fine-grained classification task using a deep neural network; it achieves the best accuracy under the weakest supervision condition and is competitive against other methods that rely on additional annotations.

Fine-Grained Image Classification via Combining Vision and Language

  • Xiangteng He, Yuxin Peng
  • Computer Science
    2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR
The two-stream model combining vision and language (CVL) for learning latent semantic representations is proposed; experiments demonstrate that the CVL approach achieves the best performance on the widely used CUB-200-2011 dataset.

Part-Based R-CNNs for Fine-Grained Category Detection

TLDR
This work proposes a model for fine-grained categorization that overcomes limitations by leveraging deep convolutional features computed on bottom-up region proposals, and learns whole-object and part detectors, enforces learned geometric constraints between them, and predicts a fine-grained category from a pose-normalized representation.

Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-Grained Image Recognition

TLDR
A novel recurrent attention convolutional neural network (RA-CNN) is proposed, which recursively learns discriminative region attention and region-based feature representation at multiple scales in a mutually reinforced way, and achieves the best performance in three fine-grained tasks.

Learning Multi-attention Convolutional Neural Network for Fine-Grained Image Recognition

TLDR
This paper proposes a novel part-learning approach via a multi-attention convolutional neural network (MA-CNN), where part generation and feature learning can reinforce each other, and shows the best performance on three challenging published fine-grained datasets.

Fine-grained pose prediction, normalization, and recognition

TLDR
This work unifies these steps in an end-to-end trainable network, supervised by keypoint locations and class labels, that localizes parts with a fully convolutional network to focus the learning of feature representations on the fine-grained classification task.

The Unreasonable Effectiveness of Noisy Data for Fine-Grained Recognition

TLDR
This work introduces an alternative approach, leveraging free, noisy data from the web and simple, generic methods of recognition, and demonstrates its efficacy on four fine-grained datasets, greatly exceeding existing state of the art without the manual collection of even a single label.
...