Corpus ID: 209444394

Measuring Dataset Granularity

@article{Cui2019MeasuringDG,
  title={Measuring Dataset Granularity},
  author={Yin Cui and Zeqi Gu and Dhruv Kumar Mahajan and Laurens van der Maaten and Serge J. Belongie and Ser-Nam Lim},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.10154}
}
Despite the increasing visibility of fine-grained recognition in our field, "fine-grained'' has thus far lacked a precise definition. In this work, building upon clustering theory, we pursue a framework for measuring dataset granularity. We argue that dataset granularity should depend not only on the data samples and their labels, but also on the distance function we choose. We propose an axiomatic framework to capture desired properties for a dataset granularity measure and provide examples of…
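The abstract's core claim is that granularity depends jointly on samples, labels, and the chosen distance function. As a toy illustration of that dependence (not the paper's actual measure, whose axiomatic definition is elided above), one can compare average intra-class to inter-class distances under a user-supplied metric:

```python
import numpy as np

def granularity(X, y, dist):
    """Toy granularity score: mean intra-class distance divided by
    mean inter-class distance under a user-supplied distance function.
    Values near 0 mean classes are well separated (coarse-feeling data);
    values near 1 mean classes blend together (finer-grained)."""
    intra, inter = [], []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            d = dist(X[i], X[j])
            (intra if y[i] == y[j] else inter).append(d)
    return np.mean(intra) / np.mean(inter)

# Two well-separated classes on a line: score is far below 1.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([0, 0, 1, 1])
euclidean = lambda a, b: np.linalg.norm(a - b)
print(granularity(X, y, euclidean))  # ≈ 0.02, classes far apart
```

Swapping in a different `dist` (or relabeling the same points) changes the score, which is exactly the sensitivity the abstract argues a granularity measure must have.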
Grafit: Learning fine-grained image representations with coarse labels
TLDR
This paper tackles the problem of learning a finer representation than the one provided by training labels, which enables fine-grained category retrieval of images in a collection annotated with coarse labels only and significantly improves the accuracy of category-level retrieval methods.
Fine-Grained Image Analysis with Deep Learning: A Survey
  • Xiu-Shen Wei, Yi-Zhe Song, +5 authors Serge J. Belongie
  • Computer Science, Medicine
  • IEEE transactions on pattern analysis and machine intelligence
  • 2021
TLDR
A systematic survey of recent advances in deep learning powered FGIA is presented, where it attempts to re-define and broaden the field of FGIA by consolidating two fundamental fine-grained research areas -- fine-grained image recognition and fine-grained image retrieval.
When Does Contrastive Visual Representation Learning Work?
Recent self-supervised representation learning techniques have largely closed the gap between supervised and unsupervised learning on ImageNet classification. While the particulars of pretraining on…

References

Showing 1–10 of 56 references
Learning a Mixture of Granularity-Specific Experts for Fine-Grained Categorization
TLDR
This work develops a unified framework based on a mixture of experts by combining a gradually-enhanced expert learning strategy and a Kullback–Leibler divergence based constraint to promote diversity among experts.
Large Scale Fine-Grained Categorization and Domain-Specific Transfer Learning
TLDR
This work proposes a measure to estimate domain similarity via Earth Mover's Distance and demonstrates that transfer learning benefits from pre-training on a source domain that is similar to the target domain by this measure.
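The paper computes Earth Mover's Distance between class-level deep features of two domains; as a much-simplified 1-D illustration of the same idea (smaller EMD = more similar domains = better transfer candidate), SciPy's `wasserstein_distance` can be used directly. The feature distributions below are synthetic stand-ins:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Synthetic 1-D feature summaries of a source and two target domains;
# the paper's measure operates on class-level deep features, which is
# more involved than this 1-D sketch.
source = rng.normal(loc=0.0, scale=1.0, size=1000)
target_near = rng.normal(loc=0.2, scale=1.0, size=1000)
target_far = rng.normal(loc=3.0, scale=1.0, size=1000)

# Smaller EMD indicates a more similar domain.
print(wasserstein_distance(source, target_near))
print(wasserstein_distance(source, target_far))
```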
Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection
TLDR
It is found that learning algorithms are surprisingly robust to annotation errors, and that this level of training-data corruption leads to an acceptably small increase in test error when the training set is sufficiently large.
Measuring the Intrinsic Dimension of Objective Landscapes
TLDR
Intrinsic dimension enables quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning; for example, it is concluded that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and that playing Atari Pong from pixels is about as hard as classifying CIFAR-10.
Cross-X Learning for Fine-Grained Visual Categorization
TLDR
This paper proposes Cross-X learning, a simple yet effective approach that exploits the relationships between different images and between different network layers for robust multi-scale feature learning. It involves two novel components: a cross-category cross-semantic regularizer that guides the extracted features to represent semantic parts, and a cross-layer regularizer that improves the robustness of multi-scale features by matching the prediction distribution across multiple layers.
Measures of Clustering Quality: A Working Set of Axioms for Clustering
TLDR
It is shown that principles like those formulated in Kleinberg's axioms can be readily expressed in the latter framework without leading to inconsistency, and several natural clustering quality measures are proposed, all satisfying the proposed axioms.
Class-Balanced Loss Based on Effective Number of Samples
TLDR
This work designs a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss, and introduces a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point.
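The effective number of samples in that paper is E_n = (1 - β^n) / (1 - β), and each class's loss weight is proportional to 1/E_n. A minimal sketch of the weight computation (the normalization to sum to the number of classes follows the paper's convention):

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class loss weights from the effective number of samples,
    E_n = (1 - beta**n) / (1 - beta); each weight is proportional to
    1/E_n, normalized so the weights sum to the number of classes."""
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights / weights.sum() * len(n)

# A long-tailed class distribution: rarer classes get larger weights.
print(class_balanced_weights([1000, 100, 10]))
```

With β → 0 this reduces to uniform weighting; with β → 1 it approaches inverse-frequency weighting, so β interpolates between the two regimes.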
Exploring the Limits of Weakly Supervised Pretraining
TLDR
This paper presents a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images and shows improvements on several image classification and object detection tasks, and reports the highest ImageNet-1k single-crop, top-1 accuracy to date.
Visualizing Data using t-SNE
We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic…
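t-SNE as described in this reference is available in scikit-learn; a minimal usage sketch on synthetic data (the blob layout and parameter values are illustrative, not from the paper):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# 100 points in 50 dimensions: two separated Gaussian blobs.
X = np.vstack([rng.normal(0.0, 1.0, (50, 50)),
               rng.normal(4.0, 1.0, (50, 50))])

# perplexity balances local vs global structure; 5-50 is typical.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(emb.shape)  # (100, 2)
```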
Bilinear CNN Models for Fine-Grained Visual Recognition
We propose bilinear models, a recognition architecture that consists of two feature extractors whose outputs are multiplied using outer product at each location of the image and pooled to obtain an…
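The outer-product-and-pool step described above can be sketched in NumPy; the signed-square-root and L2 normalization steps are the post-processing commonly used with bilinear CNN descriptors, included here as an assumption about the standard pipeline rather than a claim about this exact paper:

```python
import numpy as np

def bilinear_pool(feat_a, feat_b):
    """Outer product of two feature maps at each spatial location,
    sum-pooled over locations; feat_a is (H*W, Ca), feat_b is (H*W, Cb).
    Returns a (Ca*Cb,) image descriptor."""
    pooled = feat_a.T @ feat_b          # (Ca, Cb): sums the per-location outer products
    desc = pooled.reshape(-1)
    # Common post-processing: signed square root, then L2 normalization.
    desc = np.sign(desc) * np.sqrt(np.abs(desc))
    return desc / (np.linalg.norm(desc) + 1e-12)

rng = np.random.default_rng(0)
fa = rng.normal(size=(49, 8))   # e.g. a 7x7 spatial map with 8 channels
fb = rng.normal(size=(49, 16))  # second extractor, 16 channels
print(bilinear_pool(fa, fb).shape)  # (128,)
```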