
Meta-Learning Conjugate Priors for Few-Shot Bayesian Optimization

@article{Plug2021MetaLearningCP,
  title={Meta-Learning Conjugate Priors for Few-Shot Bayesian Optimization},
  author={Ruduan Plug},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.00729}
}
  • Ruduan Plug
  • Published 3 January 2021
  • Computer Science
  • ArXiv
Bayesian Optimization is a methodology used in statistical modelling that utilizes a Gaussian process prior distribution to iteratively update a posterior distribution towards the true distribution of the data. Finding unbiased informative priors to sample from is challenging, and the choice of prior can greatly influence the resulting posterior distribution when only few data are available. In this paper we propose a novel approach that utilizes meta-learning to automate the estimation of informative conjugate prior…
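The abstract stops short of the mechanics, but the conjugate-update step it builds on is standard. As a minimal sketch, assume a Normal likelihood with known noise and a conjugate Normal prior on its mean; the function and parameter names below are illustrative, not from the paper:

```python
import numpy as np

def normal_posterior(x, mu0, tau0, sigma):
    """Conjugate Normal-Normal update (illustrative, not the paper's code):
    prior mu ~ N(mu0, tau0^2), observations x_i ~ N(mu, sigma^2) with
    known noise sigma. Returns the posterior mean and std of mu."""
    n = len(x)
    post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + np.sum(x) / sigma**2)
    return post_mean, np.sqrt(post_var)

# Few-shot regime: with only three observations the posterior is highly
# sensitive to the choice of (mu0, tau0), which is the gap an informative
# meta-learned prior is meant to fill.
x = np.array([1.2, 0.8, 1.1])
print(normal_posterior(x, mu0=0.0, tau0=10.0, sigma=0.5))
```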


References

Showing 1-10 of 23 references

Meta-Learning Priors for Efficient Online Bayesian Regression

The proposed ALPaCA is found to be a promising plug-in tool for many regression tasks in robotics where scalability and data-efficiency are important, and outperforms kernel-based GP regression as well as state-of-the-art meta-learning approaches.
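ALPaCA's core is Bayesian linear regression over learned features, with the prior parameters set by meta-learning. A minimal sketch of that posterior update, assuming fixed features `Phi` and illustrative names for the prior mean `K0` and precision `Lambda0`:

```python
import numpy as np

def blr_posterior(Phi, y, K0, Lambda0, noise_var):
    """Bayesian linear regression with a meta-learned Gaussian prior on
    the weights: prior mean K0, prior precision Lambda0 (names are
    illustrative; ALPaCA additionally learns the feature map itself).
    Phi: (n, d) features, y: (n,) targets."""
    Lambda_n = Lambda0 + Phi.T @ Phi / noise_var
    Sigma_n = np.linalg.inv(Lambda_n)
    mean_n = Sigma_n @ (Lambda0 @ K0 + Phi.T @ y / noise_var)
    return mean_n, Sigma_n
```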

Scalable Meta-Learning for Bayesian Optimization

An ensemble model is developed that can incorporate the results of past optimization runs, while avoiding the poor scaling that comes with putting all results into a single Gaussian process model.
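The scaling concern is that a single GP over all past observations costs cubically in the number of points, while per-run models stay small. A hedged sketch of one way to combine per-run predictions as a Gaussian mixture; the `models`/`weights` interface here is an assumption for illustration, not the paper's method:

```python
import numpy as np

def ensemble_predict(models, weights, x):
    """Combine per-run surrogate predictions instead of one large GP.
    Assumed interface (illustrative): each model has predict(x) ->
    (mean, variance); weights encode relevance of each past run."""
    means = np.array([m.predict(x)[0] for m in models])
    variances = np.array([m.predict(x)[1] for m in models])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    mean = np.sum(w * means)
    # Mixture-of-Gaussians moments (law of total variance).
    var = np.sum(w * (variances + means**2)) - mean**2
    return mean, var
```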

An Exploration of Acquisition and Mean Functions in Variational Bayesian Monte Carlo

The performance of VBMC under variations of two key components of the framework is studied, and a new general family of acquisition functions for active sampling is proposed and evaluated, which includes as special cases the acquisition functions used in the original work.

Practical Bayesian Optimization of Machine Learning Algorithms

This work describes new algorithms that take into account the variable cost of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation and shows that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
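For context, here is a minimal from-scratch Bayesian optimization step with a GP surrogate and expected improvement; the paper's contribution additionally models variable evaluation cost and parallel experimentation, which this sketch omits:

```python
import numpy as np
from scipy.stats import norm

def gp_posterior(X, y, Xs, length=1.0, noise=1e-6):
    """Zero-mean GP regression with a unit-amplitude RBF kernel
    (a deliberately minimal sketch, not the paper's implementation)."""
    def k(A, B):
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-0.5 * d / length**2)
    K_inv = np.linalg.inv(k(X, X) + noise * np.eye(len(X)))
    Ks = k(X, Xs)
    mu = Ks.T @ K_inv @ y
    var = 1.0 - np.sum(Ks * (K_inv @ Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    """EI acquisition for minimization."""
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

# One BO iteration: fit the surrogate on evaluations so far, then pick
# the candidate maximizing EI (a dense grid stands in for an optimizer).
X = np.array([[0.1], [0.5], [0.9]])
y = np.sin(6 * X).ravel()
Xs = np.linspace(0, 1, 200)[:, None]
mu, sd = gp_posterior(X, y, Xs)
x_next = Xs[np.argmax(expected_improvement(mu, sd, y.min()))]
```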

ALPaCA vs. GP-based Prior Learning: A Comparison between two Bayesian Meta-Learning Algorithms

This paper investigates the similarities and disparities between two recently published Bayesian meta-learning methods, ALPaCA and PACOH, and provides theoretical analysis as well as empirical benchmarks across synthetic and real-world datasets.

Bayesian analysis of multistate event history data: beta-Dirichlet process prior

Bayesian analysis of a finite-state Markov process, which is popularly used to model multistate event history data, is considered. A new prior process, called a beta-Dirichlet process, is introduced.

Multi-task Sparse Learning with Beta Process Prior for Action Recognition

A Beta process prior is introduced into the hierarchical MTSL model, which efficiently learns a compact dictionary and infers the sparse structure shared across all the tasks; this enforces robustness in coefficient estimation compared with performing each task independently.

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems.
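The recipe is an inner gradient step that adapts a shared initialization to each task, followed by an outer update through that adaptation. A minimal sketch for one-parameter linear models, with the second-order term written out analytically; all names and the toy setup are illustrative, not the paper's code:

```python
import numpy as np

def maml_step(theta, tasks, inner_lr=0.01, outer_lr=0.01):
    """One MAML meta-update for scalar linear models y = w * x with
    squared-error loss (an illustrative sketch). Each task is
    (x_train, y_train, x_val, y_val); theta is the shared
    initialization being meta-learned."""
    meta_grad = 0.0
    for x_tr, y_tr, x_va, y_va in tasks:
        # Inner step: adapt theta to the task with one gradient step.
        g_tr = 2 * np.mean((theta * x_tr - y_tr) * x_tr)
        w = theta - inner_lr * g_tr
        # Outer gradient: chain rule through the inner update; the
        # second-order term is dw/dtheta = 1 - inner_lr * L''_train.
        g_va = 2 * np.mean((w * x_va - y_va) * x_va)
        dw_dtheta = 1.0 - inner_lr * 2 * np.mean(x_tr**2)
        meta_grad += g_va * dw_dtheta
    return theta - outer_lr * meta_grad / len(tasks)

# Toy usage: two regression tasks with different true slopes.
rng = np.random.default_rng(0)
tasks = []
for w_true in (2.0, -1.0):
    x = rng.normal(size=20)
    tasks.append((x[:10], w_true * x[:10], x[10:], w_true * x[10:]))
theta = 0.0
for _ in range(200):
    theta = maml_step(theta, tasks)
```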

Provable Guarantees for Gradient-Based Meta-Learning

This paper develops a meta-algorithm bridging the gap between popular gradient-based meta-learning and classical regularization-based multi-task transfer methods, and is the first to simultaneously satisfy good sample efficiency guarantees in the convex setting and generalization bounds that improve with task-similarity.