Distribution Embedding Networks for Generalization from a Diverse Set of Classification Tasks
@inproceedings{Liu2022DistributionEN,
  title={Distribution Embedding Networks for Generalization from a Diverse Set of Classification Tasks},
  author={Lang Liu and Mahdi Milani Fard and Sen Zhao},
  year={2022}
}
We propose Distribution Embedding Networks (DEN) for classification with small data. In the spirit of meta-learning, DEN learns from a diverse set of training tasks with the goal of generalizing to unseen target tasks. Unlike existing approaches, which require the inputs of training and target tasks to have the same dimension and possibly similar distributions, DEN allows training and target tasks to live in heterogeneous input spaces. This is especially useful for tabular-data tasks where…
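The paper's architecture is not reproduced here, but the core idea — summarizing the empirical distribution of a task's support set as a fixed-size vector so that tasks with different numbers of features map into a shared embedding space — can be sketched in a few lines. The following is a minimal, illustrative sketch only; the function names, the histogram-based feature summary, and the mean-pooling over features are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_feature(column, n_bins=8):
    """Summarize one feature's empirical distribution as a fixed-size
    vector (a normalized histogram), independent of sample size."""
    hist, _ = np.histogram(column, bins=n_bins, range=(0.0, 1.0))
    return hist / max(len(column), 1)

def embed_task(X):
    """Map a support set X of shape (n_samples, n_features) to a
    fixed-size task embedding by mean-pooling per-feature distribution
    embeddings. Pooling over features makes the output dimension
    independent of n_features, so tasks with heterogeneous input
    spaces become comparable."""
    per_feature = np.stack([embed_feature(X[:, j]) for j in range(X.shape[1])])
    return per_feature.mean(axis=0)

# Two tasks with different input dimensions land in the same space.
task_a = rng.random((50, 3))   # 3-dimensional inputs
task_b = rng.random((80, 7))   # 7-dimensional inputs
ea, eb = embed_task(task_a), embed_task(task_b)
print(ea.shape, eb.shape)      # both (8,)
```

A learned network would replace the fixed histogram with trainable set functions, but the invariance to input dimension shown here is the property that lets one model be trained across heterogeneous tasks.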
References
Showing 1-10 of 52 references.
Learning to Propagate Labels: Transductive Propagation Network for Few-Shot Learning
- Computer Science · ICLR · 2019
This paper proposes Transductive Propagation Network (TPN), a novel meta-learning framework for transductive inference that classifies the entire test set at once to alleviate the low-data problem.
Meta-learning from Tasks with Heterogeneous Attribute Spaces
- Computer Science · NeurIPS · 2020
We propose a heterogeneous meta-learning method that trains a model on tasks with various attribute spaces, such that it can solve unseen tasks whose attribute spaces are different from the training…
Generalizing to Unseen Domains: A Survey on Domain Generalization
- Computer Science · IJCAI · 2021
This paper provides a formal definition of domain generalization, discusses several related fields, and categorizes recent algorithms into three classes, presenting each in detail: data manipulation, representation learning, and learning strategy, each of which contains several popular algorithms.
Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets
- Computer Science · ICLR · 2021
The proposed MetaD2A (Meta Dataset-to-Architecture) model can stochastically generate graphs from a given set (dataset) via a cross-modal latent space learned with amortized meta-learning and also proposes a meta-performance predictor to estimate and select the best architecture without direct training on target datasets.
Meta Networks
- Computer Science · ICML · 2017
A novel meta-learning method, Meta Networks (MetaNet), is introduced that learns meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization.
Probabilistic Model-Agnostic Meta-Learning
- Computer Science · NeurIPS · 2018
This paper proposes a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution that is trained via a variational lower bound, and shows how reasoning about ambiguity can also be used for downstream active learning problems.
Learning to Compare: Relation Network for Few-Shot Learning
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples from each, which is easily extended to zero-shot learning.
A survey on heterogeneous transfer learning
- Computer Science · Journal of Big Data · 2017
This paper contributes a comprehensive survey and analysis of current methods designed for performing heterogeneous transfer learning tasks to provide an updated, centralized outlook into current methodologies.
Domain Generalization: A Survey
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence · 2022
A comprehensive literature review of domain generalization (DG) is provided, summarizing developments over the past decade and covering the background by formally defining DG and relating it to other relevant fields such as domain adaptation and transfer learning.
Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks
- Computer Science · ICLR · 2020
This work proposes a novel meta-learning model that adaptively balances the effect of meta-learning and task-specific learning within each task, and validates its Bayesian Task-Adaptive Meta-Learning on multiple realistic task- and class-imbalanced datasets.