Corpus ID: 237490399

Online Unsupervised Learning of Visual Representations and Categories

Mengye Ren, Tyler R. Scott, Michael L. Iuzzolino, Michael C. Mozer, Richard S. Zemel
Real-world learning scenarios involve a nonstationary distribution of classes with sequential dependencies among the samples, in contrast to the standard machine learning formulation of drawing samples independently from a fixed, typically uniform distribution. Furthermore, real-world interactions demand learning on-the-fly from few or no class labels. In this work, we propose an unsupervised model that simultaneously performs online visual representation learning and few-shot learning of new…


Continual Unsupervised Representation Learning
The proposed approach (CURL) performs task inference directly within the model, is able to dynamically expand to capture new concepts over its lifetime, and incorporates additional rehearsal-based techniques to deal with catastrophic forgetting.
Matching Networks for One Shot Learning
This work employs ideas from metric learning based on deep neural features, and from recent advances that augment neural networks with external memories, to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning when adapting to new class types.
Dynamic Few-Shot Visual Learning Without Forgetting
This work proposes to extend an object recognition system with an attention based few-shot classification weight generator, and to redesign the classifier of a ConvNet model as the cosine similarity function between feature representations and classification weight vectors.
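The cosine-similarity classifier described in this entry can be illustrated with a minimal NumPy sketch. The function name, the temperature parameter `tau`, and its default value are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def cosine_classifier(features, weights, tau=10.0):
    """Score each feature vector against per-class weight vectors by
    cosine similarity, scaled by a temperature tau before softmax use."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return tau * f @ w.T  # (num_examples, num_classes) logits
```

Because both features and weights are L2-normalized, the magnitude of a class weight vector no longer dominates the score, which is what lets newly generated few-shot weights compete with base-class weights.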
Meta-Learning for Semi-Supervised Few-Shot Classification
This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models can learn to improve their predictions using unlabeled examples, much like a semi-supervised algorithm would.
A Bayesian approach to unsupervised one-shot learning of object categories
This work presents a method for learning object categories from just a few images, based on incorporating "generic" knowledge which may be obtained from previously learnt models of unrelated categories, in a variational Bayesian framework.
Unsupervised Meta-Learning for Few-Shot Image Classification
UMTRA is proposed, an algorithm that performs unsupervised, model-agnostic meta-learning for classification tasks, trading off some classification accuracy for a several-orders-of-magnitude reduction in the number of required labels.
Incremental Few-Shot Learning with Attention Attractor Networks
A meta-learning model, the Attention Attractor Network, which regularizes the learning of novel classes, and it is demonstrated that the learned attractor network can help recognize novel classes while remembering old classes without the need to review the original training set.
Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation
A method, named Assume, Augment and Learn or AAL, for generating few-shot tasks using unlabeled data, which achieves good generalization performance in a variety of established few-shot learning tasks on Omniglot and Mini-Imagenet.
Self-Supervised Representation Learning from Flow Equivariance
This work presents a new self-supervised representation learning framework that can be directly deployed on a video stream of complex scenes with many moving objects, and is able to outperform representations obtained from previous state-of-the-art methods including SimCLR and BYOL.
Self-labelling via simultaneous clustering and representation learning
The proposed novel and principled learning formulation is able to self-label visual data so as to train highly competitive image representations without manual labels, and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline.
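The balancing idea behind such simultaneous clustering and representation learning can be illustrated with a short Sinkhorn-style sketch: pseudo-labels are constrained to spread mass evenly across clusters, which blocks the degenerate solution of assigning everything to one cluster. The function name, iteration count, and exact normalization scheme here are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def balanced_pseudo_labels(scores, n_iters=50):
    """Turn a (num_samples, num_clusters) score matrix into soft
    pseudo-labels whose per-cluster mass is (roughly) equal, via
    alternating row/column normalization."""
    q = np.exp(scores)
    n, k = q.shape
    for _ in range(n_iters):
        q /= q.sum(axis=1, keepdims=True) * n  # rows sum to 1/n
        q /= q.sum(axis=0, keepdims=True) * k  # columns sum to 1/k
    return q * n  # each row is a soft label summing to ~1
```

In a full pipeline the representation would be trained against these pseudo-labels and the two steps alternated; the sketch covers only the label-balancing step.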