Corpus ID: 240354227

Trace-class Gaussian priors for Bayesian learning of neural networks with MCMC

@inproceedings{Sell2020TraceclassGP,
  title={Trace-class Gaussian priors for Bayesian learning of neural networks with MCMC},
  author={Torben Sell and Sumeetpal S. Singh},
  year={2020}
}
ABSTRACT
This paper introduces a new neural-network-based prior for real-valued functions on R^d which, by construction, is more easily and cheaply scaled up in the domain dimension d than the usual Karhunen-Loève function space prior. The new prior is a Gaussian neural network prior, where each weight and bias has an independent Gaussian prior, but with the key difference that the variances decrease with the width of the network in such a way that the resulting function is almost surely…
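To make the construction concrete, here is a minimal Python sketch. It is an illustration only, not the authors' code: it draws a single-hidden-layer network whose output-weight variances decay with the unit index, here as j^(-2*alpha) with a hypothetical exponent alpha, so that the total prior variance stays finite as the width grows.

```python
import numpy as np

# Minimal sketch of a trace-class Gaussian neural network prior.
# Illustrative only: the single-hidden-layer form and the decay
# exponent alpha are assumptions, not the paper's exact construction.

def sample_prior_function(d=1, width=500, alpha=1.5, rng=None):
    """Draw one random function f: R^d -> R from a Gaussian NN prior
    whose per-unit output-weight variances decay like j**(-2*alpha),
    so the sum of variances stays finite as the width grows."""
    rng = np.random.default_rng() if rng is None else rng
    j = np.arange(1, width + 1)
    scale = j ** (-alpha)                    # std-dev decay per hidden unit
    W = rng.normal(size=(width, d))          # input weights, N(0, 1)
    b = rng.normal(size=width)               # input biases,  N(0, 1)
    v = rng.normal(size=width) * scale       # output weights, decaying variance

    def f(x):
        x = np.atleast_2d(x)                 # (n, d)
        h = np.tanh(x @ W.T + b)             # hidden activations, (n, width)
        return h @ v                         # (n,)

    return f

f = sample_prior_function(d=1)
print(f(np.linspace(-3, 3, 5).reshape(-1, 1)))
```

Because the variance series sums to a finite value for alpha > 0.5, adding more hidden units perturbs the sampled function less and less, which is what makes the width-infinite limit well defined.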


References

Showing 1-10 of 61 references
Dimension-independent likelihood-informed MCMC
Consistency of Bayesian inference with Gaussian process priors in an elliptic inverse problem
It is shown that, as the number N of measurements increases, the resulting posterior distributions concentrate around the true parameter generating the data, and a convergence rate N^{-λ}, λ > 0, is derived for the reconstruction error of the associated posterior means in L²(O)-distance.
Besov priors for Bayesian inverse problems
We consider the inverse problem of estimating a function u from noisy, possibly nonlinear, observations. We adopt a Bayesian approach to the problem. This approach has a long history for inversion…
Well-Posed Bayesian Inverse Problems with Infinitely Divisible and Heavy-Tailed Prior Measures
It is established that well-posedness relies on a balance between the growth of the log-likelihood function and the tail behavior of the prior, and the results are applied to special cases such as additive noise models and linear problems.
MAP estimators and their consistency in Bayesian nonparametric inverse problems
We consider the inverse problem of estimating an unknown function u from noisy measurements y of a known, possibly nonlinear, map G applied to u. We adopt a Bayesian approach to the problem and…
MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster
An approach to modifying a whole range of MCMC methods, applicable whenever the target measure has a density with respect to a Gaussian process or Gaussian random field reference measure, is presented; the modification ensures that the speed of convergence is robust under mesh refinement. The canonical example from this line of work is the preconditioned Crank-Nicolson (pCN) proposal, sketched below.
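A minimal pCN sketch, assuming the target has density proportional to exp(-phi(u)) with respect to a Gaussian reference measure N(0, C); the function names here are hypothetical.

```python
import numpy as np

def pcn_step(u, phi, sample_prior, beta=0.2, rng=None):
    """One preconditioned Crank-Nicolson (pCN) step.
    Target: density proportional to exp(-phi(u)) w.r.t. a Gaussian
    reference measure N(0, C); sample_prior() must return one draw
    from N(0, C). The acceptance ratio involves only phi, not the
    Gaussian densities, which is what makes the move dimension-robust."""
    rng = np.random.default_rng() if rng is None else rng
    xi = sample_prior()                              # fresh prior draw
    u_prop = np.sqrt(1.0 - beta**2) * u + beta * xi  # pCN proposal
    log_accept = phi(u) - phi(u_prop)
    if np.log(rng.uniform()) < log_accept:
        return u_prop, True
    return u, False
```

Unlike a random-walk proposal, the pCN move leaves the Gaussian reference measure invariant, so its acceptance rate does not collapse as the discretization of u is refined.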
Geometric MCMC for infinite-dimensional inverse problems
Regression and Classification Using Gaussian Process Priors
Gaussian processes are in my view the simplest and most obvious way of defining flexible Bayesian regression and classification models, but despite some past usage, they appear to have been rather neglected as a general-purpose technique.
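As a reminder of how simple the basic construction is, here is a minimal GP regression sketch; the squared-exponential kernel and the hyperparameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal GP regression sketch (RBF kernel; hyperparameters illustrative).

def rbf(x, y, ell=1.0, sigma=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays."""
    d2 = (x[:, None] - y[None, :]) ** 2
    return sigma**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=0.1):
    """Posterior mean of a zero-mean GP under Gaussian observation noise."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(0).normal(size=20)
print(gp_posterior_mean(x, y, np.array([0.25, 0.5, 0.75])))
```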
Bayesian Learning via Stochastic Gradient Langevin Dynamics
In this paper we propose a new framework for learning from large scale datasets based on iterative learning from small mini-batches. By adding the right amount of noise to a standard stochastic…
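The update the abstract alludes to is, in a minimal sketch (the function names and step size here are illustrative):

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik_minibatch,
              n_data, batch, eps=1e-4, rng=None):
    """One stochastic gradient Langevin dynamics (SGLD) update.
    The minibatch log-likelihood gradient is rescaled by n_data/len(batch)
    to estimate the full-data gradient, and the injected Gaussian noise has
    variance eps, matching the step size -- the 'right amount of noise'
    that turns stochastic gradient descent into an approximate sampler."""
    rng = np.random.default_rng() if rng is None else rng
    g = grad_log_prior(theta) \
        + (n_data / len(batch)) * grad_log_lik_minibatch(theta, batch)
    noise = rng.normal(size=np.shape(theta)) * np.sqrt(eps)
    return theta + 0.5 * eps * g + noise
```

With a decaying step size, the iterates transition from optimization-like behavior to drawing approximate samples from the posterior.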
Deep Gaussian Processes
Deep Gaussian process (GP) models are introduced, and model selection by the variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
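A deep GP composes GP layers, feeding each layer's output in as the next layer's input locations. A minimal sampling sketch at a finite set of inputs, with an assumed RBF kernel and jitter value (illustrative, not the paper's variational treatment):

```python
import numpy as np

def rbf(x, ell=1.0):
    """Squared-exponential kernel matrix for a 1-D input array."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

def sample_deep_gp(x, n_layers=2, jitter=1e-8, rng=None):
    """Compose exact GP draws: each layer's output becomes the
    input locations of the next layer, giving one deep GP sample."""
    rng = np.random.default_rng() if rng is None else rng
    h = x
    for _ in range(n_layers):
        K = rbf(h) + jitter * np.eye(len(h))  # jitter for numerical stability
        h = rng.multivariate_normal(np.zeros(len(h)), K)
    return h

print(sample_deep_gp(np.linspace(-2, 2, 10)))
```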
...