On universal Bayesian adaptation

Jüri Lember and Aad van der Vaart, Statistics and Risk Modeling.
We consider Bayesian estimation of a probability density p based on a random sample from this density. The prior is constructed in two steps: first, priors are constructed on a collection of models, each expressing a qualitative prior guess on the true density; next, these priors are combined into an overall prior by attaching prior weights to the models. The purpose is to show that the posterior distribution contracts to the true distribution at a rate that is (nearly) equal to the rate…

Nonparametric Bayesian model selection and averaging

A general theorem is presented on the rate of contraction of the resulting posterior distribution as n → ∞, giving conditions under which the rate depends in a complicated way on the priors, but also showing that the rate is fairly robust to the specification of the prior weights.

On adaptive Bayesian inference

We study the rate of Bayesian consistency for hierarchical priors consisting of prior weights on a model index set and a prior on a density model for each choice of model index. Ghosal, Lember and…

Posterior contraction for conditionally Gaussian priors

This thesis considers two examples of conditionally Gaussian processes for constructing a prior distribution on statistical models indexed by a function, and shows that Gaussian priors built from either spline functions or kernel mixtures achieve posterior contraction at a near-optimal rate.

Empirical Bayes scaling of Gaussian priors in the white noise model

The behavior of the random hyperparameter is characterized, and it is shown that a nonparametric Bayes method using it gives optimal recovery over a scale of regularity classes.

Asymptotic results in nonparametric Bayesian function estimation

This thesis considers function estimation problems in two different statistical settings, develops adaptive Bayesian procedures, derives contraction rates for these procedures, and shows that the rates are optimal in a minimax sense.

Oracle posterior contraction rates under hierarchical priors

A general Bayes theoretic framework to derive posterior contraction rates under a hierarchical prior design is offered, which serves either as theoretical justification of practical prior proposals in the literature, or as an illustration of the generic construction scheme of a (nearly) minimax adaptive estimator for a complicated experiment.

Adaptive Bayesian estimation using a Gaussian random field with inverse Gamma bandwidth

It is proved that the resulting posterior distribution contracts to the distribution that generates the data at a rate that is minimax-optimal up to a logarithmic factor, whatever the regularity level of the data-generating distribution.

Bayes factor consistency

This article focuses on the consistency property of the Bayes factor, a commonly used model comparison tool that has recently received a surge of attention in the literature, and adopts the view that a unified framework has didactic value.

Besov priors in density estimation: optimal posterior contraction rates and adaptation

Besov priors are nonparametric priors that model spatially inhomogeneous functions. They are routinely used in inverse problems and imaging, where they exhibit attractive sparsity-promoting and…

Fundamentals of Nonparametric Bayesian Inference

This authoritative text draws on theoretical advances of the past twenty years to synthesize all aspects of Bayesian nonparametrics, from prior construction to computation and large sample behavior of posteriors, making it valuable for both graduate students and researchers in statistics and machine learning.

On Bayesian Adaptation

We show that Bayes estimators of an unknown density can adapt to unknown smoothness of the density. We combine prior distributions on each element of a list of log spline density models of different…

Density estimation by wavelet thresholding

Density estimation is a commonly used test case for nonparametric estimation methods. We explore the asymptotic properties of estimators based on thresholding of empirical wavelet coefficients.

Risk bounds for model selection via penalization

It is shown that the quadratic risk of the minimum penalized empirical contrast estimator is bounded by an index of the accuracy of the sieve, which quantifies the trade-off among the candidate models between the approximation error and parameter dimension relative to sample size.
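The trade-off this abstract describes — approximation error versus parameter dimension relative to sample size — can be illustrated with a minimal penalized model selection sketch. This is not the paper's estimator; the polynomial sieve, the penalty constant `c`, and the toy data are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=n)  # toy regression data

def penalized_score(deg, c=2.0):
    """Empirical quadratic risk of a degree-`deg` polynomial fit,
    plus a penalty proportional to (parameter dimension) / (sample size)."""
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    return np.mean(resid ** 2) + c * (deg + 1) / n

# Minimum penalized empirical contrast: pick the degree balancing
# approximation error against model dimension.
best = min(range(1, 15), key=penalized_score)
print(best, penalized_score(best))
```

A richer sieve (splines, wavelets) and a theoretically calibrated penalty would replace the hand-picked `c` in a serious application.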

Convergence rates for posterior distributions and adaptive estimation

The goal of this paper is to provide theorems on convergence rates of posterior distributions that can be applied to obtain good convergence rates in the context of density estimation as well as…

Wavelet Shrinkage: Asymptopia?

A method for curve estimation from n noisy data points: translate the empirical wavelet coefficients towards the origin by an amount √(2 log n)/√n. Loose parallels are drawn with near-optimality in robustness, and also with the broad near-eigenfunction properties of wavelets themselves.
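The shrinkage rule in this abstract is soft thresholding at the universal level. A minimal self-contained sketch, where the noise scale `sigma` and the toy coefficient vector are illustrative assumptions, not values from the paper:

```python
import numpy as np

def soft_threshold(coeffs, n, sigma=1.0):
    """Shrink empirical wavelet coefficients toward the origin by the
    universal amount sigma * sqrt(2 * log n) / sqrt(n); coefficients
    smaller than the threshold in absolute value are set to zero."""
    t = sigma * np.sqrt(2.0 * np.log(n)) / np.sqrt(n)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

n = 1024  # sample size; threshold ≈ 0.116 here
coeffs = np.array([0.5, -0.02, 0.01, -0.4])  # toy empirical coefficients
shrunk = soft_threshold(coeffs, n)
print(shrunk)
```

Small coefficients (likely pure noise) are killed outright, while large ones survive with their magnitude reduced by the threshold, which is the source of the method's spatial adaptivity.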

Convergence rates of posterior distributions

We consider the asymptotic behavior of posterior distributions and Bayes estimators for infinite-dimensional statistical models. We give general results on the rate of convergence of the posterior…

Minimum complexity density estimation

An index of resolvability is proved to bound the rate of convergence of minimum complexity density estimators, as well as the information-theoretic redundancy of the corresponding total description length, demonstrating the statistical effectiveness of the minimum description-length principle as a method of inference.

Asymptotic Methods In Statistical Decision Theory

Chapter 1, Experiments and Decision Spaces: 1. Introduction; 2. Vector Lattices, L-Spaces, Transitions; 3. Experiments, Decision Procedures; 4. A Basic Density Theorem; 5. Building Experiments from Other Ones; 6. …

Entropy and ε-capacity of sets in functional spaces

The article is mainly devoted to the systematic exposition of results that were published in the years 1954–1958 by K. I. Babenko [1], A. G. Vitushkin [2,3], V. D. Yerokhin [4], A. N. Kolmogorov…

Weak Convergence and Empirical Processes: With Applications to Statistics

This chapter discusses weak convergence, almost uniform convergence, and convergence in probability, focusing on the part of the Donsker property concerned with uniformity and metrization.