Corpus ID: 227261667

Are We Overfitting to Experimental Setups in Recognition?

@article{Wallingford2020AreWO,
  title={Are We Overfitting to Experimental Setups in Recognition?},
  author={Matthew Wallingford and Aditya Kusupati and Keivan Alizadeh-Vahid and Aaron Walsman and Aniruddha Kembhavi and Ali Farhadi},
  journal={arXiv: Computer Vision and Pattern Recognition},
  year={2020}
}
Enabling robust intelligence in the real world entails systems that offer continuous inference while learning from varying amounts of data and supervision. The machine learning community has organically broken down this challenging goal into manageable sub-tasks such as supervised, few-shot, and continual learning. In light of substantial progress on each sub-task, we pose the question, "How well does this progress translate to more practical scenarios?" To investigate this question, we…
Variable-Shot Adaptation for Online Meta-Learning
TLDR
On sequential learning problems, meta-learning solves the full task set with fewer overall labels and achieves greater cumulative performance, compared to standard supervised methods, suggesting that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
A Unified Few-Shot Classification Benchmark to Compare Transfer and Meta Learning Approaches
Meta and transfer learning are two successful families of approaches to few-shot learning. Despite highly related goals, state-of-the-art advances in each family are measured largely in isolation of each other.
CoMPS: Continual Meta Policy Search
TLDR
This work develops a new continual meta-learning method, CoMPS, that outperforms prior continual learning and off-policy meta-reinforcement learning methods on several sequences of challenging continuous control tasks.
LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes
TLDR
This work proposes a novel method for Learning Low-dimensional binary Codes (LLC) for instances as well as classes that is super-efficient while still ensuring nearly optimal classification accuracy for ResNet50 on ImageNet-1K and captures intrinsically important features in the data by discovering an intuitive taxonomy over classes.
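The summary above only names the technique, so the following is a hedged sketch of the inference step alone: instance codes come from the sign of a low-dimensional projection, and a label is chosen by minimum Hamming distance to per-class codes. `W` and `class_codes` are random stand-ins here; LLC learns both end to end.

```python
# Hedged sketch of classification with low-dimensional binary codes.
# W and class_codes are random placeholders, not LLC's learnt values.
import numpy as np

rng = np.random.default_rng(0)
D, K, C = 512, 20, 1000          # feature dim, code bits, classes (assumed sizes)
W = rng.normal(size=(D, K))      # stand-in for the learnt projection
class_codes = rng.integers(0, 2, size=(C, K))  # stand-in for learnt class codes

def encode(features):
    # K-bit instance code: sign of the low-dimensional projection.
    return (features @ W > 0).astype(np.int8)

def classify(features):
    codes = encode(features)                                  # (n, K)
    # Hamming distance of each instance code to every class code.
    hamming = (codes[:, None, :] != class_codes[None, :, :]).sum(-1)
    return hamming.argmin(axis=1)                             # nearest class code

print(classify(rng.normal(size=(2, D))))   # two (arbitrary) class indices
```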

References

Showing 1–10 of 67 references
Few-Shot Learning With Localization in Realistic Settings
TLDR
Three parameter-free improvements are introduced that double the accuracy of state-of-the-art models on meta-iNat while generalizing to prior benchmarks, complex neural architectures, and settings with substantial domain shift.
Incremental Learning in Online Scenario
TLDR
This paper proposes an incremental learning framework that can work in the challenging online learning scenario and handle both data from new classes and new observations of old classes, and demonstrates a real-life application of online food image classification based on the complete framework using the Food-101 dataset.
GDumb: A Simple Approach that Questions Our Progress in Continual Learning
We discuss a general formulation for the Continual Learning (CL) problem for classification: a learning task where a stream provides samples to a learner and the goal of the learner, depending on the…
TADAM: Task dependent adaptive metric for improved few-shot learning
TLDR
This work identifies that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms, and proposes and empirically tests a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space.
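The metric-scaling idea is easy to state concretely: a temperature alpha multiplies the distances before the softmax, which reshapes the gradients of the episode loss. A minimal sketch under stated assumptions (NumPy; TADAM's task-conditioning half is omitted, and `prototypes` is assumed precomputed):

```python
# Sketch of metric scaling: logits are scaled negative squared distances.
import numpy as np

def scaled_softmax_probs(query_z, prototypes, alpha=5.0):
    """Class probabilities from alpha-scaled distances to prototypes."""
    d2 = ((query_z[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -alpha * d2                          # metric scaling
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)
```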
Matching Networks for One Shot Learning
TLDR
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
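A minimal sketch of the matching step, assuming a placeholder `embed` in place of the paper's deep features and omitting its full-context embeddings: a query's label distribution is an attention-weighted sum of support labels, with the attention a softmax over cosine similarities.

```python
# Simplified matching step; embed is a hypothetical stand-in.
import numpy as np

def embed(x):
    return x  # placeholder for the learned deep embedding

def matching_predict(support_x, support_y, query_x, n_classes):
    s, q = embed(support_x), embed(query_x)
    s = s / np.linalg.norm(s, axis=1, keepdims=True)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    sims = q @ s.T                                            # cosine similarities
    attn = np.exp(sims) / np.exp(sims).sum(1, keepdims=True)  # softmax attention
    onehot = np.eye(n_classes)[support_y]                     # (n_support, n_classes)
    return (attn @ onehot).argmax(axis=1)                     # weighted label vote
```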
Optimization as a Model for Few-Shot Learning
Meta-Transfer Learning for Few-Shot Learning
TLDR
A novel few-shot learning method called meta-transfer learning (MTL) is proposed, which learns to adapt a deep NN for few-shot learning tasks, and the hard task (HT) meta-batch scheme is introduced as an effective learning curriculum for MTL.
Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?
TLDR
It is shown that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, followed by training a linear classifier on top of this representation, outperforms state-of-the-art few-shot learning methods.
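That baseline is short enough to state in code. A hedged sketch, with a hypothetical frozen `embed` standing in for the representation learned on the meta-training set:

```python
# "Good embedding + linear classifier" few-shot baseline (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(x):
    return x  # stand-in for a representation trained on the meta-training set

def few_shot_baseline(support_x, support_y, query_x):
    # Fit a linear classifier on frozen features of the support set.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embed(support_x), support_y)
    return clf.predict(embed(query_x))

# Toy 2-way episode with trivially separable features.
sx = np.r_[np.zeros((5, 8)), np.ones((5, 8))]
sy = np.array([0] * 5 + [1] * 5)
print(few_shot_baseline(sx, sy, sx[:2]))  # -> [0 0]
```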
Recent Advances in Autoencoder-Based Representation Learning
TLDR
An in-depth review of recent advances in representation learning, focusing on autoencoder-based models and on the meta-priors believed useful for downstream tasks, such as disentanglement and hierarchical organization of features.
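As a point of reference for the model family this review covers, here is a minimal linear autoencoder trained by gradient descent on reconstruction error (NumPy; a toy sketch only, since real models add nonlinearities and the meta-priors discussed above):

```python
# Minimal linear autoencoder: encode to 4 dims, decode back, minimize MSE.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))                  # toy data
W_enc = rng.normal(scale=0.1, size=(16, 4))     # 16-dim -> 4-dim code
W_dec = rng.normal(scale=0.1, size=(4, 16))     # 4-dim code -> 16-dim output

lr = 0.01
for step in range(500):
    Z = X @ W_enc                               # encode
    X_hat = Z @ W_dec                           # decode
    err = X_hat - X                             # reconstruction error
    # Gradients of mean squared error w.r.t. both weight matrices.
    dW_dec = Z.T @ err / len(X)
    dW_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * dW_dec
    W_enc -= lr * dW_enc

print(float((err ** 2).mean()))                 # reconstruction MSE after training
```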
Prototypical Networks for Few-shot Learning
TLDR
This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
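The design is indeed simple enough to capture in a few lines. A minimal sketch of one episode, with an identity `embed` standing in for the learned CNN:

```python
# Prototypical Networks episode (sketch); embed is a hypothetical stand-in.
import numpy as np

def embed(x):
    return x  # placeholder for the learned embedding network

def prototypical_predict(support_x, support_y, query_x, n_classes):
    """Classify queries by squared Euclidean distance to class prototypes."""
    z_support, z_query = embed(support_x), embed(query_x)
    # Prototype = mean embedding of each class's support examples.
    prototypes = np.stack([z_support[support_y == c].mean(axis=0)
                           for c in range(n_classes)])       # (n_classes, d)
    d2 = ((z_query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return (-d2).argmax(axis=1)   # negative distance acts as the logit

# 3-way, 2-shot toy episode: class c is clustered around c * 10.
rng = np.random.default_rng(0)
sy = np.array([0, 0, 1, 1, 2, 2])
sx = rng.normal(size=(6, 4)) + 10 * sy[:, None]
qx = rng.normal(size=(3, 4)) + 10 * np.arange(3)[:, None]
print(prototypical_predict(sx, sy, qx, n_classes=3))  # -> [0 1 2]
```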