Corpus ID: 254562281

Distribution Embedding Networks for Generalization from a Diverse Set of Classification Tasks

Lang Liu, Mahdi Milani Fard, Sen Zhao
We propose Distribution Embedding Networks (DEN) for classification with small data. In the spirit of meta-learning, DEN learns from a diverse set of training tasks with the goal of generalizing to unseen target tasks. Unlike existing approaches, which require the inputs of training and target tasks to have the same dimension and possibly similar distributions, DEN allows training and target tasks to live in heterogeneous input spaces. This is especially useful for tabular-data tasks where…

Learning to Propagate Labels: Transductive Propagation Network for Few-Shot Learning

This paper proposes Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.

Meta-learning from Tasks with Heterogeneous Attribute Spaces

We propose a heterogeneous meta-learning method that trains a model on tasks with various attribute spaces, such that it can solve unseen tasks whose attribute spaces differ from those of the training tasks.

Generalizing to Unseen Domains: A Survey on Domain Generalization

This paper provides a formal definition of domain generalization, discusses several related fields, and categorizes recent algorithms into three classes: data manipulation, representation learning, and learning strategy, presenting the popular algorithms in each class in detail.

Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets

The proposed MetaD2A (Meta Dataset-to-Architecture) model stochastically generates graphs from a given dataset via a cross-modal latent space learned with amortized meta-learning, and a meta-performance predictor estimates and selects the best architecture without direct training on target datasets.

Meta Networks

A novel meta-learning method, Meta Networks (MetaNet), is introduced that learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization.

Probabilistic Model-Agnostic Meta-Learning

This paper proposes a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution that is trained via a variational lower bound, and shows how reasoning about ambiguity can also be used for downstream active learning problems.

Learning to Compare: Relation Network for Few-Shot Learning

A conceptually simple, flexible, and general framework for few-shot learning, in which a classifier must learn to recognise new classes given only a few examples of each; the framework is easily extended to zero-shot learning.

A survey on heterogeneous transfer learning

This paper contributes a comprehensive survey and analysis of current methods for heterogeneous transfer learning, providing an updated, centralized view of current methodologies.

Domain Generalization: A Survey

A comprehensive literature review of DG that summarizes developments over the past decade and covers the background by formally defining DG and relating it to other relevant fields such as domain adaptation and transfer learning.

Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks

This work proposes a novel meta-learning model that adaptively balances the effect of meta-learning and task-specific learning within each task, and validates its Bayesian Task-Adaptive Meta-Learning on multiple realistic task- and class-imbalanced datasets.