BAYESIAN INVERSE PROBLEMS WITH GAUSSIAN PRIORS

@article{Knapik2011BAYESIANIP,
  title={BAYESIAN INVERSE PROBLEMS WITH GAUSSIAN PRIORS},
  author={Bartek Knapik and Aad van der Vaart and J. H. van Zanten},
  journal={Annals of Statistics},
  year={2011},
  volume={39},
  pages={2626-2657}
}
The posterior distribution in a nonparametric inverse problem is shown to contract to the true parameter at a rate that depends on the smoothness of the parameter, and the smoothness and scale of the prior. Correct combinations of these characteristics lead to the minimax rate. The frequentist coverage of credible sets is shown to depend on the combination of prior and true parameter, with smoother priors leading to zero coverage and rougher priors to conservative coverage. In the latter case… 
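
A minimal sketch of the sequence-space formulation behind these statements (standard notation for this setting, which may differ in details from the paper): the observation Y = K\mu + n^{-1/2} Z is diagonalized by the singular value decomposition of the compact operator K, giving

    Y_i = \kappa_i \mu_i + n^{-1/2} Z_i, \qquad Z_i \sim N(0,1) \text{ i.i.d.}, \qquad \kappa_i \asymp i^{-p},

with a Gaussian prior \mu_i \sim N(0, i^{-1-2\alpha}) on the coefficients (possibly rescaled) and a truth \mu_0 in a Sobolev-type class S^\beta. The posterior then contracts at the rate

    \varepsilon_n \asymp n^{-\min(\alpha,\beta)/(1+2\alpha+2p)},

which matches the minimax rate n^{-\beta/(1+2\beta+2p)} when \alpha = \beta (or under suitable rescaling of the prior). In this notation, "smoother priors" corresponds to \alpha > \beta and "rougher priors" to \alpha < \beta.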

Bayesian inverse problems with partial observations

Bayesian Recovery of the Initial Condition for the Heat Equation

We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a
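
For orientation, a brief sketch of the standard form of this problem (the cited paper's exact setup may differ): with Dirichlet boundary conditions on (0,1), the heat equation u_t = u_{xx} has a solution operator that is diagonal in the sine basis e_i(x) = \sqrt{2}\sin(i\pi x), with coefficients e^{-i^2\pi^2 T} at time T. Recovering the initial condition u(\cdot,0) from a noisy observation of u(\cdot,T) is therefore severely (exponentially) ill-posed, and the attainable contraction rates are typically logarithmic rather than polynomial in the noise level.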

Bayesian linear inverse problems in regularity scales

We obtain rates of contraction of posterior distributions in inverse problems defined by scales of smoothness classes. We derive abstract results for general priors, with contraction rates determined

Bayesian inverse problems with unknown operators

We consider the Bayesian approach to linear inverse problems when the underlying operator depends on an unknown parameter. Allowing for finite dimensional as well as infinite dimensional parameters,

REGULARIZING PRIORS FOR LINEAR INVERSE PROBLEMS

This paper proposes a new Bayesian approach for estimating, nonparametrically, functional parameters in econometric models that are characterized as the solution of a linear inverse problem. By using

Adaptive Bayesian credible bands in regression with a Gaussian process prior

TLDR
It is shown that all methods lead to a posterior contraction rate that adapts to the smoothness of the true regression function, and that the corresponding credible sets cover the true regression function whenever this function satisfies a certain extrapolation condition.

Posterior consistency for Bayesian inverse problems through stability and regression results

TLDR
Posterior consistency justifies the use of the Bayesian approach in much the same way as error bounds and convergence results do for regularization techniques; it is then of interest to show that the resulting sequence of posterior measures concentrates around the truth used to generate the data.

Bayesian posterior contraction rates for linear severely ill-posed inverse problems

Abstract. We consider a class of linear ill-posed inverse problems arising from inversion of a compact operator with singular values which decay exponentially to zero. We adopt a Bayesian approach,
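
This contrasts with the polynomially (mildly) ill-posed setting of the main paper: schematically, \kappa_i \asymp i^{-p} there versus \kappa_i \asymp e^{-\gamma i^r} here (a rough contrast, not the cited paper's exact assumptions). Under exponential decay of the singular values, minimax and posterior contraction rates over Sobolev-type classes are powers of 1/\log n rather than powers of 1/n.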

Designing truncated priors for direct and inverse Bayesian problems

Abstract: The Bayesian approach to inverse problems with functional unknowns has received significant attention in recent years. An important component of the developing theory is the study of the
...

References

SHOWING 1-10 OF 39 REFERENCES

On the Bernstein-von Mises Theorem with Infinite Dimensional Parameters

If there are many independent, identically distributed observations governed by a smooth, finite-dimensional statistical model, the Bayes estimate and the maximum likelihood estimate will be close.

Bayesian inference with rescaled Gaussian process priors

TLDR
This work exhibits rescaled Gaussian process priors yielding posteriors that contract around the true parameter at optimal convergence rates and establishes bounds on small deviation probabilities for smooth stationary Gaussian processes.

Statistical Estimation and Optimal Recovery

TLDR
The method of proof exposes a correspondence between minimax affine estimates in the statistical estimation problem and optimal algorithms in the theory of optimal recovery.

Bernstein von Mises Theorems for Gaussian Regression with increasing number of regressors

This paper brings a contribution to the Bayesian theory of nonparametric and semiparametric estimation. We are interested in the asymptotic normality of the posterior distribution in Gaussian linear

On the Bernstein-von Mises phenomenon in the Gaussian white noise model

We study the Bernstein-von Mises (BvM) phenomenon, i.e., Bayesian credible sets and frequentist confidence regions for the estimation error coincide asymptotically, for the infinite-dimensional

Inverse problems: A Bayesian perspective

TLDR
The Bayesian approach to regularization is reviewed, developing a function space viewpoint on the subject, which allows for a full characterization of all possible solutions, and their relative probabilities, whilst simultaneously forcing significant modelling issues to be addressed in a clear and precise fashion.

Statistical Inverse Estimation in Hilbert Scales

TLDR
The recovery of signals from indirect measurements, blurred by random noise, is considered under the assumption that prior knowledge regarding the smoothness of the signal is available, and the general problem is embedded in an abstract Hilbert scale.

On adaptive inverse estimation of linear functionals in Hilbert scales

We address the problem of estimating the value of a linear functional ⟨f, x⟩ from random noisy observations of y = Ax in Hilbert scales. Both the white noise and density observation models are

An Analysis of Bayesian Inference for Nonparametric Regression

  • D. Cox
  • 1993
TLDR
It is shown that the frequentist coverage probability of a variety of (1 - alpha) posterior probability regions tends to be larger than 1 - alpha, but will be infinitely often less than any epsilon > 0 as n approaches infinity, with prior probability 1.

Regularization methods for linear inverse problems

The primary difficulty with linear ill-posed problems is that the inverse image is undetermined due to small (or zero) singular values of A. Actually the situation is a little worse in practice because
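
To make the link to the main paper concrete, here is a hypothetical numerical sketch (not code from any of the cited works): in a diagonal SVD sequence model, the classical Tikhonov estimator and the posterior mean under a Gaussian prior have the same componentwise shrinkage form, a standard correspondence between regularization and the Bayesian approach. All parameter choices below (decay exponents, noise level, prior variances) are illustrative assumptions.

import numpy as np

# Diagonal (SVD) sequence model: y_i = kappa_i * mu_i + sigma * z_i.
rng = np.random.default_rng(0)
n = 1000
i = np.arange(1, n + 1)
kappa = i ** -1.5                    # polynomially decaying singular values (assumed)
mu_true = i ** -1.2 * np.sin(i)      # a smooth "truth" in sequence space (assumed)
sigma = 1e-3                         # noise level, playing the role of 1/sqrt(n)
y = kappa * mu_true + sigma * rng.standard_normal(n)

# Tikhonov / ridge estimate: argmin ||y - K mu||^2 + delta * ||mu||^2,
# which in the SVD domain is componentwise shrinkage.
delta = 1e-6
mu_tikhonov = kappa * y / (kappa ** 2 + delta)

# Conjugate posterior mean under the Gaussian prior mu_i ~ N(0, lambda_i):
# same shrinkage form, with delta replaced by sigma^2 / lambda_i.
lam = i ** -2.4                      # prior variances ~ i^(-1-2*alpha) with alpha = 0.7 (assumed)
mu_post = lam * kappa * y / (lam * kappa ** 2 + sigma ** 2)

print("Tikhonov L2 error      :", np.linalg.norm(mu_tikhonov - mu_true))
print("Posterior mean L2 error:", np.linalg.norm(mu_post - mu_true))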