• Corpus ID: 230799098

A unified performance analysis of likelihood-informed subspace methods

@inproceedings{Cui2021AUP,
  title={A unified performance analysis of likelihood-informed subspace methods},
  author={Tiangang Cui and Xin Tong},
  year={2021}
}
The likelihood-informed subspace (LIS) method offers a viable route to reducing the dimensionality of high-dimensional probability distributions arising in Bayesian inference. LIS identifies an intrinsic low-dimensional linear subspace where the target distribution differs the most from some tractable reference distribution. Such a subspace can be identified using the leading eigenvectors of a Gram matrix of the gradient of the log-likelihood function. Then, the original high-dimensional target… 
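The subspace construction described above can be made concrete with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: `grad_log_likelihood`, the standard Gaussian reference samples, and the linear-Gaussian toy problem are all hypothetical inputs chosen for the example.

```python
import numpy as np

def likelihood_informed_subspace(grad_log_likelihood, reference_samples, rank):
    """Sketch of LIS: leading eigenvectors of the Gram matrix of
    log-likelihood gradients, averaged over reference samples."""
    dim = reference_samples.shape[1]
    H = np.zeros((dim, dim))
    for x in reference_samples:
        g = grad_log_likelihood(x)            # gradient of the log-likelihood, shape (dim,)
        H += np.outer(g, g)                   # Monte Carlo estimate of the Gram matrix
    H /= len(reference_samples)
    eigvals, eigvecs = np.linalg.eigh(H)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:rank]  # keep the `rank` largest
    return eigvecs[:, order], eigvals[order]

# Hypothetical usage on a linear-Gaussian toy problem y = A x + noise.
rng = np.random.default_rng(0)
dim, n_obs, noise_var = 20, 5, 0.1
A = rng.normal(size=(n_obs, dim))
y = rng.normal(size=n_obs)
grad_ll = lambda x: A.T @ (y - A @ x) / noise_var
reference = rng.normal(size=(200, dim))       # standard Gaussian reference samples
U, lam = likelihood_informed_subspace(grad_ll, reference, rank=n_obs)
print(lam)                                    # informed directions carry the large eigenvalues
```

For a non-standard Gaussian reference, one would typically work with prior-preconditioned (whitened) gradients before the eigendecomposition, so that the retained directions are informed relative to the prior.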

Prior normalization for certified likelihood-informed subspace detection of Bayesian inverse problems
TLDR
A prior normalization technique is proposed that transforms non-Gaussian priors into standard Gaussian distributions, making it possible to apply LIS methods and thereby accelerate MCMC sampling; the integration of such transformations with several MCMC methods for high-dimensional problems is rigorously investigated.
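As a hedged illustration of the underlying idea (my own sketch, not the cited paper's construction): when the prior has independent marginals with known CDFs, a componentwise map through the prior CDF followed by the standard Gaussian inverse CDF yields standard Gaussian coordinates, in which LIS and MCMC machinery can then operate. The Gamma prior below is an arbitrary example.

```python
import numpy as np
from scipy import stats

# Assumed prior: independent Gamma(2, 1) marginals (illustrative choice only).
prior = stats.gamma(a=2.0, scale=1.0)

def to_gaussian(x):
    """Map prior coordinates to standard Gaussian coordinates (componentwise)."""
    return stats.norm.ppf(prior.cdf(x))

def from_gaussian(z):
    """Map standard Gaussian coordinates back to the original prior coordinates."""
    return prior.ppf(stats.norm.cdf(z))

x = prior.rvs(size=(5000, 3), random_state=0)
z = to_gaussian(x)
print(z.mean(axis=0), z.std(axis=0))   # approximately 0 and 1 in each coordinate

# An MCMC sampler targeting the posterior would then run in z-space, evaluating
# the likelihood at from_gaussian(z) and adding a standard Gaussian prior term.
```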
Data-free likelihood-informed dimension reduction of Bayesian inverse problems
TLDR
A novel gradient-based dimension reduction method is proposed in which the informed subspace does not depend on the data; this permits an offline-online computational strategy in which the expensive detection of the problem's low-dimensional structure is carried out in an offline phase, i.e., before observing the data.
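The data-free variant can be sketched as a small change to the Gram-matrix construction shown earlier (again an illustrative assumption, not the paper's exact estimator): gradients are averaged over parameters drawn from the prior and data simulated from the forward model, so the subspace can be computed offline, before any observation arrives.

```python
import numpy as np

def data_free_lis(grad_log_likelihood, simulate_data, prior_samples, rank, rng):
    """Sketch of a data-independent informed subspace: average the gradient
    Gram matrix over (parameter, simulated data) pairs from the joint model.
    `grad_log_likelihood(x, y)` and `simulate_data(x, rng)` are assumed inputs."""
    dim = prior_samples.shape[1]
    H = np.zeros((dim, dim))
    for x in prior_samples:
        y_sim = simulate_data(x, rng)         # draw synthetic data from the likelihood
        g = grad_log_likelihood(x, y_sim)     # gradient at the simulated pair
        H += np.outer(g, g)
    H /= len(prior_samples)
    eigvals, eigvecs = np.linalg.eigh(H)
    order = np.argsort(eigvals)[::-1][:rank]
    return eigvecs[:, order], eigvals[order]
```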
Gradient-based data and parameter dimension reduction for Bayesian models: an information theoretic perspective
TLDR
This work uses an information-theoretic analysis to derive a bound on the posterior error induced by parameter and data dimension reduction, and compares the resulting gradient-based reduction with classical dimension reduction techniques, such as principal component analysis and canonical correlation analysis, on applications ranging from mechanics to image processing.
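For orientation, bounds of this type in the LIS literature typically control the posterior approximation error by the trailing eigenvalues of the gradient Gram matrix; the sketch below shows a common form for parameter reduction (the exact constants and assumptions in the cited paper may differ). Here \(\pi\) is the posterior, \(\pi_r\) its rank-\(r\) approximation, \(\mathcal{L}\) the likelihood, and \(C\) a log-Sobolev constant of the reference distribution.

```latex
% Common form of a certified parameter-reduction bound (assumption: the
% reference/prior satisfies a logarithmic Sobolev inequality with constant C).
\[
  D_{\mathrm{KL}}\!\left(\pi \,\middle\|\, \pi_r\right)
  \;\le\; \frac{C}{2} \sum_{i > r} \lambda_i(H),
  \qquad
  H = \mathbb{E}\!\left[\nabla \log \mathcal{L}(x)\,\nabla \log \mathcal{L}(x)^{\top}\right],
\]
% where the \lambda_i are the eigenvalues of H in decreasing order and the
% rank-r subspace is spanned by the corresponding leading eigenvectors.
```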
Efficient Derivative-free Bayesian Inference for Large-Scale Inverse Problems
We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model. This renders most Markov…
Conditional Deep Inverse Rosenblatt Transports
TLDR
A novel offline-online method is proposed to mitigate the computational burden of characterizing conditional beliefs in statistical learning, together with novel heuristics for reordering and/or reparametrizing the variables to enhance the approximation power of the tensor-train (TT) format.
Sampling with Trusthworthy Constraints: A Variational Gradient Framework
TLDR
This work proposes a family of constrained sampling algorithms which generalize Langevin Dynamics and Stein Variational Gradient Descent to incorporate a moment constraint specified by a general nonlinear function, and derives two types of algorithms for handling constraints: a primal-dual gradient approach and a constraint-controlled gradient descent approach.
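A minimal sketch of the primal-dual flavor (my own toy construction, not the paper's algorithm): Langevin particles sample from the Lagrangian-tilted density while a Lagrange multiplier is increased whenever the moment constraint is violated and relaxed toward zero once it is satisfied. The Gaussian target and the constraint E[x_0] >= 1 are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target exp(-U) with U(x) = ||x||^2 / 2 (standard Gaussian).
grad_U = lambda x: x
# Hypothetical moment constraint E[g(x)] <= 0 with g(x) = 1 - x_0,
# i.e., require the mean of the first coordinate to be at least 1.
g = lambda x: 1.0 - x[:, 0]
def grad_g(x):
    out = np.zeros_like(x)
    out[:, 0] = -1.0
    return out

n, d = 500, 2
x = rng.normal(size=(n, d))        # Langevin particles
lam = 0.0                          # Lagrange multiplier (dual variable)
step, dual_step = 1e-2, 5e-2

for it in range(3000):
    # Primal step: Langevin dynamics on the Lagrangian U(x) + lam * g(x).
    drift = -grad_U(x) - lam * grad_g(x)
    x = x + step * drift + np.sqrt(2 * step) * rng.normal(size=x.shape)
    # Dual step: projected gradient ascent on the estimated constraint violation.
    lam = max(0.0, lam + dual_step * g(x).mean())

print(x[:, 0].mean(), lam)   # the mean of x_0 is pushed toward 1; lam settles nearby
```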
Bayesian, frequentist, and information geometric approaches to parametric uncertainty quantification of classical empirical interatomic potentials.
TLDR
It is shown how information geometry can motivate new, natural parameterizations that improve the stability and interpretation of uncertainty quantification analysis and further suggest simplified, less-sloppy models.
Deep Composition of Tensor Trains using Squared Inverse Rosenblatt Transports
  • T. Cui, S. Dolgov
  • Computer Science
    Foundations of Computational Mathematics
  • 2021
TLDR
The proposed order-preserving functional tensor-train transport is integrated into a nested variable transformation framework inspired by the layered structure of deep neural networks and significantly expands the capability of tensor approximations and transport maps to random variables with complicated nonlinear interactions and concentrated density functions.
Spectral gap of replica exchange Langevin diffusion on mixture distributions
TLDR
It is shown that ReLD can attain constant or better convergence rates even when the density components of the mixture concentrate around isolated modes, and that using mReLD with K additional LDs achieves the same result while the exchange frequency only needs to be the (1/K)-th power of that in ReLD.
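A minimal two-replica sketch of replica exchange Langevin dynamics (the toy bimodal target, temperatures, and step sizes below are my own illustrative choices, not taken from the paper): a hot chain moves freely across modes, and periodic Metropolis-accepted swaps transfer that mobility to the chain at the target temperature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy bimodal target exp(-U): equal-weight Gaussian mixture with modes at +/- 3.
U = lambda x: -np.log(0.5 * np.exp(-0.5 * (x - 3.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 3.0) ** 2))
grad_U = lambda x, eps=1e-5: (U(x + eps) - U(x - eps)) / (2 * eps)  # finite differences

beta = np.array([1.0, 0.1])          # inverse temperatures: target replica, hot replica
x = np.array([3.0, -3.0])            # one state per replica
step, swap_every = 1e-2, 10
cold_samples = []

for it in range(30000):
    # Independent Langevin step for each replica at its own temperature.
    drift = -beta * np.array([grad_U(x[0]), grad_U(x[1])])
    x = x + step * drift + np.sqrt(2.0 * step) * rng.normal(size=2)
    # Periodic replica-exchange (swap) move with Metropolis acceptance.
    if it % swap_every == 0:
        log_acc = (beta[0] - beta[1]) * (U(x[0]) - U(x[1]))
        if np.log(rng.uniform()) < log_acc:
            x = x[::-1].copy()
    cold_samples.append(x[0])

cold_samples = np.array(cold_samples)
print((cold_samples > 0).mean())     # the cold chain visits both modes (value near 0.5)
```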
AEOLUS: Advances in Experimental Design, Optimal Control, and Learning for Uncertain Complex Systems Center Progress Report
  • Mathematics
  • 2019
Table-of-contents excerpt: "Learning from data: Low-dimensional modeling and reduced models", including "Lift & Learn: Learning low-dimensional models for an additive manufacturing solidification process".
