Corpus ID: 208637221

MetaFun: Meta-Learning with Iterative Functional Updates

@inproceedings{Xu2020MetaFunMW,
  title={MetaFun: Meta-Learning with Iterative Functional Updates},
  author={Jin Xu and Jean-Francois Ton and Hyunjik Kim and Adam R. Kosiorek and Yee Whye Teh},
  booktitle={ICML},
  year={2020}
}
We develop a functional encoder-decoder approach to supervised meta-learning, where labeled data is encoded into an infinite-dimensional functional representation rather than a finite-dimensional one. Furthermore, rather than directly producing the representation, we learn a neural update rule resembling functional gradient descent which iteratively improves the representation. The final representation is used to condition the decoder to make predictions on unlabeled data. Our approach is the…
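To make the iterative functional update concrete, here is a minimal PyTorch sketch. It is our own simplification, not the paper's architecture: the representation is tracked only at the context and target inputs, local updates from a small MLP are propagated with a fixed RBF kernel, and all names and sizes are illustrative (the paper also considers attention-based propagation and learned components throughout).

    import torch
    import torch.nn as nn

    class MetaFunSketch(nn.Module):
        """Iterative functional updates, heavily simplified (names are ours)."""
        def __init__(self, dim_x=1, dim_y=1, dim_r=32, n_iters=3, lr=0.1, lengthscale=0.5):
            super().__init__()
            self.dim_r, self.n_iters, self.lr, self.ls = dim_r, n_iters, lr, lengthscale
            # local update network: maps (current representation, label) to an update
            self.u = nn.Sequential(nn.Linear(dim_r + dim_y, 64), nn.ReLU(), nn.Linear(64, dim_r))
            # decoder: conditions on the final representation to predict targets
            self.decoder = nn.Sequential(nn.Linear(dim_x + dim_r, 64), nn.ReLU(), nn.Linear(64, dim_y))

        def kernel(self, a, b):
            # fixed RBF kernel; the paper learns how updates are propagated
            d2 = (a.unsqueeze(1) - b.unsqueeze(0)).pow(2).sum(-1)
            return torch.exp(-d2 / (2 * self.ls ** 2))

        def forward(self, x_ctx, y_ctx, x_tgt):
            x_all = torch.cat([x_ctx, x_tgt], dim=0)
            r = x_all.new_zeros(x_all.size(0), self.dim_r)   # functional representation at all inputs
            K = self.kernel(x_all, x_ctx)                    # (n_all, n_ctx)
            for _ in range(self.n_iters):
                r_ctx = r[:x_ctx.size(0)]                    # representation at labelled points
                u = self.u(torch.cat([r_ctx, y_ctx], -1))    # local updates, like functional gradients
                r = r - self.lr * (K @ u)                    # kernel-smoothed functional update
            return self.decoder(torch.cat([x_tgt, r[x_ctx.size(0):]], -1))

For example, MetaFunSketch()(torch.randn(5, 1), torch.randn(5, 1), torch.randn(10, 1)) returns predictions at the ten target inputs.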

Citations

Meta Learning for Causal Direction
TLDR: This paper introduces a generative model that distinguishes cause from effect in the bivariate setting under limited observational data, and proposes an end-to-end algorithm that exploits similar training datasets at test time.
Amortized Bayesian Prototype Meta-learning: A New Probabilistic Meta-learning Approach to Few-shot Image Classification
TLDR: This paper proposes a probabilistic meta-learning method, amortized Bayesian prototype meta-learning, which learns the posterior distributions of latent prototypes in an amortized way, with no need for an extra amortization network, so that their posteriors conditional on a few labeled samples can be easily approximated at the meta-training or meta-testing stage.
Few-shot Learning for Topic Modeling
TLDR: A neural network-based few-shot learning method is proposed that learns a topic model from just a few documents, using a set of multiple text corpora in an episodic training framework.
Function Contrastive Learning of Transferable Meta-Representations
TLDR: A decoupled encoder-decoder approach to supervised meta-learning, where the encoder is trained with a contrastive objective to find a good representation of the underlying function; the resulting representations outperform strong baselines in downstream performance and noise robustness.
Gaussian Process Meta Few-shot Classifier Learning via Linear Discriminant Laplace Approximation
TLDR: This work takes a Bayesian Gaussian process (GP) approach, in which the meta-learner learns the GP prior and adaptation to a new task is carried out by the GP predictive model from the posterior inference.
Group Equivariant Conditional Neural Processes
TLDR: A decomposition theorem for permutation-invariant and group-equivariant maps is given, leading to the construction of EquivCNPs with an infinite-dimensional latent space to handle group symmetries; EquivCNP with translation equivariance achieves performance comparable to conventional CNPs on a 1D regression task.
Hierarchical Few-Shot Generative Models
TLDR: This work generalizes deep latent variable approaches to few-shot learning, taking a step towards large-scale few-shot generation with a formulation that readily works with current state-of-the-art deep generative models.
Learning to Rectify for Robust Learning with Noisy Labels
  • Haoliang Sun, Chenhui Guo, Qi Wei, Zhongyi Han, Yilong Yin
  • Computer Science
  • Pattern Recognition
  • 2021
TLDR: Warped probabilistic inference (WarPI) is proposed to adaptively rectify the training procedure of the classification network within the meta-learning scenario, demonstrating a significant improvement in generalization ability.
Meta-Learning for Koopman Spectral Analysis with Short Time-series
TLDR: A meta-learning method is proposed for estimating embedding functions from unseen short time-series by exploiting knowledge learned from related but different time-series; it achieves better performance in eigenvalue estimation and future prediction than existing methods.
Meta-learning One-class Classifiers with Eigenvalue Solvers for Supervised Anomaly Detection
TLDR: The proposed neural network-based meta-learning method for supervised anomaly detection improves anomaly detection performance on unseen tasks, which contain a few labeled normal and anomalous instances, by meta-training with various datasets.

References

Showing 1-10 of 47 references
Meta-Learning with Latent Embedding Optimization
TLDR: This work shows that latent embedding optimization (LEO) achieves state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks, and indicates that LEO captures uncertainty in the data and adapts more effectively by optimizing in latent space.
Meta-Learning for Semi-Supervised Few-Shot Classification
TLDR: This work proposes novel extensions of Prototypical Networks that are augmented with the ability to use unlabeled examples when producing prototypes, and confirms that these models learn to improve their predictions from unlabeled examples, much like a semi-supervised algorithm would.
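The core mechanism is easy to sketch: start from class-mean prototypes, then refine them with soft k-means assignments of the unlabeled embeddings. A minimal single-refinement-step version (function name ours; the paper additionally handles distractor classes):

    import torch

    def refined_prototypes(z_support, y_support, z_unlabeled, n_classes):
        # initial prototypes: per-class means of the labelled embeddings
        protos = torch.stack([z_support[y_support == c].mean(0) for c in range(n_classes)])
        # soft-assign unlabelled embeddings to prototypes by squared distance
        w = torch.softmax(-torch.cdist(z_unlabeled, protos).pow(2), dim=1)   # (n_unlab, C)
        # recompute prototypes as weighted means over labelled and unlabelled points
        counts = torch.stack([(y_support == c).sum() for c in range(n_classes)]).float()
        num = protos * counts.unsqueeze(1) + w.t() @ z_unlabeled
        return num / (counts + w.sum(0)).unsqueeze(1)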
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning…
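Since MetaFun's iterative updates are motivated as an alternative to gradient-based adaptation, MAML's inner/outer loop is worth recalling. A minimal per-task sketch (assuming PyTorch 2.x for torch.func.functional_call; names ours):

    import torch

    def maml_task_loss(model, loss_fn, support, query, inner_lr=0.01):
        x_s, y_s = support
        x_q, y_q = query
        params = dict(model.named_parameters())
        # inner loop: one gradient step on the support set, keeping the graph
        inner = loss_fn(torch.func.functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(inner, list(params.values()), create_graph=True)
        fast = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
        # outer objective: adapted parameters evaluated on the query set
        return loss_fn(torch.func.functional_call(model, fast, (x_q,)), y_q)

Meta-training averages this loss over a batch of tasks and backpropagates into the shared initialization.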
Matching Networks for One Shot Learning
TLDR: This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
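The basic matching step amounts to attention over the support set. A sketch without the paper's full context embeddings (FCE); embed is any feature extractor:

    import torch
    import torch.nn.functional as F

    def matching_predict(embed, x_support, y_onehot, x_query):
        z_s = F.normalize(embed(x_support), dim=-1)    # unit-norm support embeddings
        z_q = F.normalize(embed(x_query), dim=-1)
        attn = torch.softmax(z_q @ z_s.t(), dim=-1)    # cosine-similarity attention
        return attn @ y_onehot                         # per-query label distribution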
Meta-Learning With Differentiable Convex Optimization
TLDR: The objective is to learn feature embeddings that generalize well under a linear classification rule for novel categories; this work exploits two properties of linear classifiers: implicit differentiation of the optimality conditions of the convex problem, and the dual formulation of the optimization problem.
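The paper differentiates through a multi-class SVM; to illustrate the differentiable-convex-head idea with less machinery, here is a closed-form ridge-regression head instead (a deliberate simplification, not the paper's solver):

    import torch

    def ridge_head(z_support, y_onehot, z_query, lam=1.0):
        d = z_support.size(1)
        # closed-form convex solution; gradients flow through the linear solve
        A = z_support.t() @ z_support + lam * torch.eye(d, device=z_support.device)
        W = torch.linalg.solve(A, z_support.t() @ y_onehot)   # (d, n_classes)
        return z_query @ W                                    # query logits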
Optimization as a Model for Few-Shot Learning
  • International Conference on Learning Representations
  • 2016
Attentive Neural Processes
TLDR: Attention is incorporated into Neural Processes (NPs), allowing each input location to attend to the relevant context points for the prediction; this greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled.
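The key architectural piece is cross-attention from target inputs to context points. A sketch of the deterministic path (the paper also has a latent path; dimensions and names are ours):

    import torch
    import torch.nn as nn

    class ANPCrossAttention(nn.Module):
        def __init__(self, dim_x, dim_r, n_heads=4):
            super().__init__()
            self.q = nn.Linear(dim_x, dim_r)   # queries from target inputs
            self.k = nn.Linear(dim_x, dim_r)   # keys from context inputs
            self.attn = nn.MultiheadAttention(dim_r, n_heads, batch_first=True)

        def forward(self, x_tgt, x_ctx, r_ctx):
            # each target location attends to the relevant context representations
            out, _ = self.attn(self.q(x_tgt), self.k(x_ctx), r_ctx)
            return out   # (batch, n_tgt, dim_r): one representation per target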
Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks
TLDR: This work presents an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set, and reduces the computation time of self-attention from quadratic to linear in the number of elements in the set.
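The quadratic-to-linear reduction comes from attending through a small set of learned inducing points (ISAB). A stripped-down sketch omitting the residual connections, feed-forward layers, and layer norms of the paper's full blocks:

    import torch
    import torch.nn as nn

    class ISABSketch(nn.Module):
        def __init__(self, dim, n_heads=4, n_inducing=16):
            super().__init__()
            self.inducing = nn.Parameter(torch.randn(1, n_inducing, dim))
            self.mab1 = nn.MultiheadAttention(dim, n_heads, batch_first=True)
            self.mab2 = nn.MultiheadAttention(dim, n_heads, batch_first=True)

        def forward(self, x):                            # x: (batch, n, dim)
            i = self.inducing.expand(x.size(0), -1, -1)
            h, _ = self.mab1(i, x, x)    # m inducing points summarize the set: O(n*m)
            out, _ = self.mab2(x, h, h)  # elements read the summary back: O(n*m)
            return out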
Convolutional Conditional Neural Processes
TLDR: This work introduces the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data, and demonstrates that any translation-equivariant embedding can be represented using a convolutional deep set.
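The convolutional deep set begins with a "set conv" that maps an off-grid context set to a gridded functional embedding with a density channel. A 1D sketch (fixed lengthscale here, where the paper learns it; a CNN is then applied over the grid):

    import torch

    def set_conv_encode(x_ctx, y_ctx, grid, lengthscale=0.1):
        # x_ctx: (N,), y_ctx: (N, dim_y), grid: (G,)
        w = torch.exp(-(grid.unsqueeze(1) - x_ctx.unsqueeze(0)).pow(2) / (2 * lengthscale ** 2))
        density = w.sum(1, keepdim=True)                  # (G, 1): where the data lives
        signal = (w @ y_ctx) / density.clamp(min=1e-8)    # (G, dim_y): normalized signal
        return torch.cat([density, signal], dim=-1)       # gridded embedding for a CNN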