Corpus ID: 245335036

Bayesian neural network priors for edge-preserving inversion

@article{Li2021BayesianNN,
  title={Bayesian neural network priors for edge-preserving inversion},
  author={Chen Li and Matthew M. Dunlop and Georg Stadler},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.10663}
}
We consider Bayesian inverse problems wherein the unknown state is assumed to be a function with discontinuous structure a priori. A class of prior distributions based on the output of neural networks with heavy-tailed weights is introduced, motivated by existing results concerning the infinite-width limit of such networks. We show theoretically that samples from such priors have desirable discontinuous-like properties even when the network width is finite, making them appropriate for edge-preserving inversion.
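
As a rough illustration of this construction, the following sketch (not the authors' code; the width, the 1/width output scaling, and the tanh activation are illustrative assumptions) draws one sample function from a one-hidden-layer network prior with i.i.d. Cauchy weights:

import numpy as np

rng = np.random.default_rng(0)

def sample_nn_prior(x, width=500):
    """Draw one random function u(x) from a Cauchy-weight network prior."""
    # Hidden-layer weights and biases: i.i.d. standard Cauchy (heavy-tailed).
    W1 = rng.standard_cauchy(size=(width, 1))
    b1 = rng.standard_cauchy(size=(width, 1))
    # Output weights: Cauchy, scaled by 1/width (the alpha = 1 stable scaling).
    W2 = rng.standard_cauchy(size=(1, width)) / width
    hidden = np.tanh(W1 @ x[None, :] + b1)  # shape (width, n_points)
    return (W2 @ hidden).ravel()

x = np.linspace(0.0, 1.0, 1000)
u = sample_nn_prior(x)  # samples show jump-like, piecewise-flat features

Unlike a Gaussian-weight network, a few heavy-tailed hidden units can dominate the sum, which is what produces the discontinuous-like samples at finite width.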

Citations

Multilevel Bayesian Deep Neural Networks
TLDR
This work develops multilevel Monte Carlo (MLMC) methods for Bayesian inference associated with deep neural networks, in particular trace-class neural network (TNN) priors, which were developed as more robust alternatives to classical architectures in the context of inference problems.
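
For orientation, here is a generic, purely illustrative sketch of the MLMC telescoping estimator; the toy level function and sample counts are assumptions, not the TNN-specific construction of the cited work:

import numpy as np

rng = np.random.default_rng(1)

def approx(omega, level):
    # Hypothetical level-`level` approximation whose bias shrinks as the
    # level grows (stand-in for a discretized forward model or network).
    h = 2.0 ** (-level)
    return np.sin(omega) + h * np.cos(omega)

def mlmc(levels, n_samples):
    # E[approx(., L)] written as a telescoping sum of level corrections,
    # each estimated with coupled samples (shared randomness).
    estimate = 0.0
    for level, n in zip(range(levels + 1), n_samples):
        omega = rng.normal(size=n)
        fine = approx(omega, level)
        coarse = approx(omega, level - 1) if level > 0 else 0.0
        estimate += np.mean(fine - coarse)
    return estimate

print(mlmc(levels=4, n_samples=[4000, 2000, 1000, 500, 250]))

Because the corrections shrink with the level, most samples can be taken on the cheap coarse levels, which is the source of the cost savings.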

References

Showing 1–10 of 39 references
Can one use total variation prior for edge-preserving Bayesian inversion?
Estimation of non-discrete physical quantities from indirect linear measurements is considered. Bayesian solution of such an inverse problem involves discretizing the problem and expressing available…
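
For concreteness, a discretized total variation prior penalizes the sum of absolute increments; a minimal sketch, with the 1D grid and scale lam as illustrative choices:

import numpy as np

def tv_log_prior(u, lam=10.0):
    # Unnormalized log-density of a discrete TV prior: pi(u) ~ exp(-lam * TV(u)).
    return -lam * np.sum(np.abs(np.diff(u)))

The cited paper's answer is essentially negative: under mesh refinement the TV prior loses its edge-preserving character, which motivates the heavier-tailed alternatives below.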
Cauchy difference priors for edge-preserving Bayesian inversion
We consider inverse problems in which the unknown target includes sharp edges, for example interfaces between different materials. Such problems are typical in image reconstruction…
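
A minimal sketch of the idea, assuming a 1D grid on which the increments of u are i.i.d. Cauchy with illustrative scale gam:

import numpy as np

def cauchy_diff_log_prior(u, gam=0.1):
    # Unnormalized log-density: increments are i.i.d. Cauchy(0, gam), so
    # large jumps are only logarithmically penalized.
    d = np.diff(u)
    return -np.sum(np.log(gam**2 + d**2))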
Besov priors for Bayesian inverse problems
We consider the inverse problem of estimating a function $u$ from noisy, possibly nonlinear, observations. We adopt a Bayesian approach to the problem. This approach has a long history for inversion…
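
A minimal sampling sketch under common assumptions: a sine basis on [0, 1], integrability exponent p = 1 (so the coefficients are Laplace-distributed, the edge-friendly case), and illustrative smoothness and truncation level:

import numpy as np

rng = np.random.default_rng(2)

def sample_besov_prior(x, n_terms=256, s=1.5, p=1):
    # u = sum_k k^{-(s + 1/2 - 1/p)} xi_k psi_k with xi_k i.i.d. Laplace.
    k = np.arange(1, n_terms + 1)
    gamma = k ** (-(s + 0.5 - 1.0 / p))                  # coefficient decay
    xi = rng.laplace(size=n_terms)                       # p = 1 coefficients
    psi = np.sqrt(2.0) * np.sin(np.pi * np.outer(k, x))  # basis on [0, 1]
    return (gamma * xi) @ psi

x = np.linspace(0.0, 1.0, 512)
u = sample_besov_prior(x)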
Priors for Infinite Networks
In this chapter, I show that priors over network parameters can be defined in such a way that the corresponding priors over functions computed by the network reach reasonable limits as the number of hidden units goes to infinity.
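
A quick numerical illustration of this classical limit (the tanh activation and unit weight variances are illustrative): with output weights scaled by 1/sqrt(width), the output at a fixed input approaches a Gaussian as the width grows.

import numpy as np

rng = np.random.default_rng(3)

def output_samples(width, n_samples=20000, x=0.3):
    # One-hidden-layer network outputs at a fixed input, CLT scaling.
    W1 = rng.normal(size=(n_samples, width))
    b1 = rng.normal(size=(n_samples, width))
    W2 = rng.normal(size=(n_samples, width)) / np.sqrt(width)
    return np.sum(W2 * np.tanh(W1 * x + b1), axis=1)

for width in (1, 10, 1000):
    y = output_samples(width)
    print(width, y.std(), np.mean(np.abs(y) > 3 * y.std()))  # tail mass

This Gaussian limit is precisely what the heavy-tailed priors above are designed to escape.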
Cauchy Markov random field priors for Bayesian inversion
The use of Cauchy Markov random field priors in statistical inverse problems can potentially lead to posterior distributions which are non-Gaussian, high-dimensional, multimodal and heavy-tailed…
Well-posed Bayesian inverse problems and heavy-tailed stable quasi-Banach space priors
This article extends the framework of Bayesian inverse problems in infinite-dimensional parameter spaces, as advocated by Stuart (Acta Numer. 19:451–559, 2010) and others, to the case of a heavy-tailed prior measure in the family of stable distributions, such as an infinite-dimensional Cauchy distribution…
On the convergence of the Laplace approximation and noise-level-robustness of Laplace-based Monte Carlo methods for Bayesian inverse problems
TLDR
The Bayesian approach to inverse problems provides a rigorous framework for the incorporation and quantification of uncertainties in measurements, parameters, and models; it is shown that Laplace-based importance sampling and Laplace-based quasi-Monte Carlo methods are robust with respect to the concentration of the posterior for large classes of posterior distributions and integrands.
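
As a reminder of the mechanics, here is a toy 1D sketch (not the paper's PDE setting; phi is a made-up negative log-posterior) of a Laplace approximation followed by Laplace-based importance sampling:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def phi(u):
    # Toy negative log-posterior (illustrative).
    return 0.5 * (u - 1.0) ** 2 + 0.1 * u ** 4

# Laplace approximation: Gaussian centered at the MAP point with variance
# given by the inverse Hessian of phi there (finite-difference Hessian).
u_map = minimize(lambda v: phi(v[0]), x0=[0.0]).x[0]
h = 1e-4
hess = (phi(u_map + h) - 2.0 * phi(u_map) + phi(u_map - h)) / h**2
sigma = 1.0 / np.sqrt(hess)

# Laplace-based importance sampling: propose from N(u_map, sigma^2) and
# reweight by exp(-phi(u)) over the proposal density (up to constants).
u = rng.normal(u_map, sigma, size=10000)
log_w = -phi(u) + 0.5 * ((u - u_map) / sigma) ** 2
w = np.exp(log_w - log_w.max())  # stabilized self-normalized weights
print(u_map, np.sum(w * u) / np.sum(w))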
Invertible generative models for inverse problems: mitigating representation error and dataset bias
TLDR
It is demonstrated that invertible neural networks, which have zero representation error by design, can be effective natural signal priors for inverse problems such as denoising, compressive sensing, and inpainting.
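
A toy sketch of the idea, with a monotone elementwise map plus a fixed orthogonal mixing standing in for a trained invertible network; the measurement matrix, step size, and generator are all illustrative assumptions:

import numpy as np

rng = np.random.default_rng(5)
n, m = 64, 32
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))  # fixed orthogonal mixing
A = rng.normal(size=(m, n)) / np.sqrt(m)      # compressive measurements

def G(z):
    # Toy invertible generator: strictly monotone elementwise map + rotation,
    # so every signal has a latent preimage (zero representation error).
    return Q @ (z + np.tanh(z))

x_true = G(rng.normal(size=n))
y = A @ x_true

# Recover by gradient descent on the latent code (analytic chain rule).
z = np.zeros(n)
for _ in range(3000):
    r = A @ G(z) - y
    sech2 = 1.0 - np.tanh(z) ** 2             # derivative of tanh
    z -= 0.02 * (1.0 + sech2) * (Q.T @ (A.T @ r))

print(np.linalg.norm(A @ G(z) - y))           # measurement residual near 0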
MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster
TLDR
An approach to modifying a whole range of MCMC methods is presented, applicable whenever the target measure has a density with respect to a Gaussian process or Gaussian random field reference measure; this ensures that their speed of convergence is robust under mesh refinement.
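
For reference, a minimal sketch of the preconditioned Crank-Nicolson (pCN) proposal analyzed in this line of work; the reference covariance, the potential phi, and the step size beta are illustrative:

import numpy as np

rng = np.random.default_rng(6)

n = 100                                       # grid points on [0, 1]
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)  # Gaussian reference covariance
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

def phi(u):
    # Toy negative log-likelihood; the prior never enters the accept step.
    return 0.5 * np.sum((u - 1.0) ** 2)

def pcn(n_steps=5000, beta=0.2):
    u = L @ rng.normal(size=n)                # start from a prior draw
    for _ in range(n_steps):
        xi = L @ rng.normal(size=n)           # fresh draw from the prior
        v = np.sqrt(1.0 - beta**2) * u + beta * xi
        if np.log(rng.uniform()) < phi(u) - phi(v):
            u = v                             # accept w.p. min(1, e^{phi(u)-phi(v)})
    return u

sample = pcn()

Because the acceptance probability involves only the likelihood potential, the acceptance rate does not collapse as the grid is refined, which is the mesh-robustness the reference establishes.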
Approximation of Bayesian Inverse Problems for PDEs
TLDR
This paper is based on an approach to regularization, employing a Bayesian formulation of the problem, which leads to a notion of well-posedness for inverse problems at the level of probability measures; this is used as the basis for quantifying the approximation of inverse problems for functions.