Bayesian posterior contraction rates for linear severely ill-posed inverse problems

@article{Agapiou2013BayesianPC,
  title={Bayesian posterior contraction rates for linear severely ill-posed inverse problems},
  author={Sergios Agapiou and Andrew M. Stuart and Yuan-Xiang Zhang},
  journal={Journal of Inverse and Ill-posed Problems},
  year={2013},
  volume={22},
  pages={297--321}
}
Abstract. We consider a class of linear ill-posed inverse problems arising from inversion of a compact operator with singular values which decay exponentially to zero. We adopt a Bayesian approach, assuming a Gaussian prior on the unknown function. The observational noise is assumed to be Gaussian; as a consequence the prior is conjugate to the likelihood, so that the posterior distribution is also Gaussian. We study Bayesian posterior consistency in the small observational noise limit. …
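
As a toy illustration of the conjugate structure described in the abstract, the sketch below works coordinate-wise in the singular basis of the forward operator; the specific singular values, prior variances and noise level are assumptions chosen for illustration, not values from the paper.

import numpy as np

# Sequence-space model y_k = s_k * u_k + noise, with singular values
# s_k = exp(-b*k) decaying exponentially to zero (the severely ill-posed
# regime). A Gaussian prior and Gaussian noise are conjugate, so the
# posterior is Gaussian with explicit mean and variance per coordinate.
rng = np.random.default_rng(0)
K = 200                          # number of retained singular components
k = np.arange(1, K + 1)
s = np.exp(-0.5 * k)             # assumed exponential singular-value decay
tau2 = k.astype(float) ** -2.0   # assumed prior variances
eps = 1e-3                       # assumed observational noise level

u_true = rng.normal(0.0, np.sqrt(tau2))        # draw a "truth" from the prior
y = s * u_true + eps * rng.standard_normal(K)  # noisy observation

# Conjugate Gaussian update: posterior precision = prior precision + s^2/eps^2.
post_var = 1.0 / (1.0 / tau2 + (s / eps) ** 2)
post_mean = post_var * s * y / eps**2
print("posterior mean L2 error:", np.linalg.norm(post_mean - u_true))

Shrinking eps toward zero and watching this error decrease is exactly the small-noise consistency question the paper studies.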

Posterior Contraction in Bayesian Inverse Problems Under Gaussian Priors

TLDR
This work reviews and re-derives several existing results, establishes minimax contraction rates in cases which have not been considered before, and shows how to overcome saturation in an empirical Bayes framework by using a non-centered, data-dependent prior.

Consistency of the posterior distribution in generalized linear inverse problems

For ill-posed inverse problems, a regularized solution can be interpreted as a mode of the posterior distribution in a Bayesian framework. This framework enriches the set of possible solutions, as …
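
The mode interpretation can be made concrete in the linear conjugate Gaussian case. As a standard sketch (assuming noise covariance $\varepsilon^2 I$ and prior covariance $C$, notation not taken from this particular paper), the posterior mode is

\hat{u}_{\mathrm{MAP}} = \arg\min_{u} \; \frac{1}{2\varepsilon^{2}} \|y - Au\|^{2} + \frac{1}{2} \|C^{-1/2} u\|^{2},

which is Tikhonov regularization with the penalty induced by the prior covariance and the regularization parameter set by the noise level.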

Preconditioning the prior to overcome saturation in Bayesian inverse problems

TLDR
The approach allows one to obtain, and by preconditioning the prior to improve beyond saturation, the minimax contraction rates established in previous studies, and it establishes minimax contraction rates in cases which have not been considered so far.

Some results on contraction rates for Bayesian inverse problems

We prove a general lemma for deriving contraction rates for linear inverse problems with nonparametric, nonconjugate priors. We then apply it to Gaussian priors and obtain minimax rates in mildly ill-posed settings.

Posterior consistency for Bayesian inverse problems through stability and regression results

TLDR
Posterior consistency justifies the use of the Bayesian approach in much the same way as error bounds and convergence results do for regularization techniques. It is then of interest to show that the resulting sequence of posterior measures, arising from a sequence of data, concentrates around the truth used to generate the data.

Bayesian inverse problems with unknown operators

We consider the Bayesian approach to linear inverse problems when the underlying operator depends on an unknown parameter. Allowing for finite-dimensional as well as infinite-dimensional parameters, …

Bernstein-von Mises Theorems and Uncertainty Quantification for Linear Inverse Problems

TLDR
It is proved that semiparametric posterior estimation and uncertainty quantification are valid and optimal from a frequentist point of view, and frequentist guarantees are derived for certain credible balls centred at the posterior mean $\bar{f}$.
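
What a frequentist guarantee for a credible set means can be checked numerically in a one-dimensional conjugate Gaussian toy model; everything below (forward map, noise level, prior variance, true value) is an assumption for illustration, not the semiparametric setting of the paper.

import numpy as np

# Repeatedly simulate data from a fixed truth and record how often a 95%
# credible interval centred at the posterior mean contains that truth.
rng = np.random.default_rng(1)
s, eps, tau2 = 0.5, 0.1, 1.0        # assumed forward map, noise, prior variance
u0 = 0.7                            # fixed "true" parameter
post_var = 1.0 / (1.0 / tau2 + (s / eps) ** 2)
radius = 1.96 * np.sqrt(post_var)   # 95% credible half-width (Gaussian posterior)

hits = 0
n_rep = 10_000
for _ in range(n_rep):
    y = s * u0 + eps * rng.standard_normal()
    post_mean = post_var * s * y / eps**2
    hits += abs(post_mean - u0) <= radius
# Coverage need not equal 0.95: prior shrinkage biases the posterior mean,
# and Bernstein-von Mises theorems identify when this bias is negligible.
print("empirical coverage:", hits / n_rep)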

A general approach to posterior contraction in nonparametric inverse problems

In this paper we propose a general method to derive an upper bound for the contraction rate of the posterior distribution in nonparametric inverse problems. We present a general theorem that allows …

Posterior consistency and convergence rates for Bayesian inversion with hypoelliptic operators

The Bayesian approach to inverse problems is studied in the case where the forward map is a linear hypoelliptic pseudodifferential operator and the measurement error is additive white Gaussian noise. …

On the Bernstein-Von Mises Theorem for High Dimensional Nonlinear Bayesian Inverse Problems

We prove a Bernstein-von Mises theorem for a general class of high-dimensional nonlinear Bayesian inverse problems in the vanishing noise limit. We propose a sufficient condition on the growth rate …

References

Bayesian inverse problems with Gaussian priors

The posterior distribution in a nonparametric inverse problem is shown to contract to the true parameter at a rate that depends on the smoothness of the parameter, and the smoothness and scale of the prior.
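
For orientation, the rate in that work for the mildly ill-posed case is usually quoted (with singular values decaying like $k^{-p}$, a truth of smoothness $\beta$, a Gaussian prior of regularity $\alpha$ and noise level $n^{-1/2}$; consult the paper for the precise conditions) as

\varepsilon_n \asymp n^{-\min(\alpha,\beta)/(1+2\alpha+2p)},

which attains the minimax rate over $\beta$-smooth truths when the prior smoothness matches, $\alpha = \beta$, or when the prior is suitably rescaled.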

Posterior convergence for approximated unknowns in non-Gaussian statistical inverse problems

In practical statistical inverse problems, one often considers only finite-dimensional unknowns and investigates their posterior probabilities numerically. As many unknowns are function-valued, it is …

Non-Gaussian statistical inverse problems. Part I: Posterior distributions

One approach to noisy inverse problems is to use Bayesian methods. In this work, the statistical inverse problem of estimating the probability distribution of an infinite-dimensional unknown given …

Bayesian Recovery of the Initial Condition for the Heat Equation

We study a Bayesian approach to recovering the initial condition for the heat equation from noisy observations of the solution at a later time. We consider a class of prior distributions indexed by a …
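
The heat equation is the canonical severely ill-posed example: in the Fourier basis the time-$T$ solution operator multiplies the $k$-th coefficient of the initial condition by $e^{-k^2 T}$. The short sketch below (with an assumed observation time $T$) prints how violently inversion amplifies noise.

import numpy as np

# Singular values of the heat semigroup on the circle at time T decay like
# exp(-k**2 * T); inverting amplifies the noise in the k-th mode by the
# reciprocal factor, hence "severely" (exponentially) ill-posed.
T = 0.1
for k in range(1, 11):
    s_k = np.exp(-(k**2) * T)
    print(f"k={k:2d}  s_k={s_k:.3e}  noise amplification 1/s_k={1.0 / s_k:.3e}")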

Inverse problems: A Bayesian perspective

TLDR
The Bayesian approach to regularization is reviewed, developing a function-space viewpoint on the subject which allows for a full characterization of all possible solutions and their relative probabilities, whilst simultaneously forcing significant modelling issues to be addressed in a clear and precise fashion.

Regularization of exponentially ill-posed problems

Linear and nonlinear inverse problems which are exponentially ill-posed arise in heat conduction, satellite gradiometry, potential theory and scattering theory. For these problems, logarithmic source conditions …
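
For reference, the logarithmic source conditions alluded to here are usually written (in the standard form from the regularization literature, with exponent $p > 0$) as

u^{\dagger} = f_{p}(A^{*}A)\, w, \qquad f_{p}(\lambda) = (-\ln \lambda)^{-p} \quad (0 < \lambda \le e^{-1}),

replacing the Hölder-type conditions $u^{\dagger} = (A^{*}A)^{\nu} w$ that are natural for mildly ill-posed problems.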

Linear inverse problems for generalised random variables

In a statistical inverse theory both the unknown quantity and the measurement are random variables. The solution of the inverse problem is then the conditional distribution of the unknown variable given the measurement.

Nonparametric statistical inverse problems

We explain some basic theoretical issues regarding nonparametric statistics applied to inverse problems. Simple examples are used to present classical concepts such as the white noise model, risk …
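
The Gaussian white noise model mentioned in this entry is, in its usual formulation (a standard display, not quoted from the reference itself),

dY(t) = (Au)(t)\, dt + \frac{1}{\sqrt{n}}\, dW(t), \qquad t \in [0,1],

where $W$ is a standard Brownian motion and $n$ plays the role of an inverse noise level, so observing $Y$ amounts to observing $Au$ corrupted by Gaussian noise of size $n^{-1/2}$.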