Corpus ID: 204800893

Integrals over Gaussians under Linear Domain Constraints

@article{Gessner2020IntegralsOG,
  title={Integrals over Gaussians under Linear Domain Constraints},
  author={Alexandra Gessner and Oindrila Kanjilal and Philipp Hennig},
  journal={ArXiv},
  year={2020},
  volume={abs/1910.09328}
}
Integrals of linearly constrained multivariate Gaussian densities are a frequent problem in machine learning and statistics, arising in tasks like generalized linear models and Bayesian optimization. Yet they are notoriously hard to compute, and to further complicate matters, the numerical values of such integrals may be very small. We present an efficient black-box algorithm that exploits geometry for estimating integrals over a small, truncated Gaussian volume and for simulating…
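For orientation (a sketch not drawn from the paper itself; all function names are illustrative): the quantity in question is the Gaussian mass of a polytope, F = P(Ax ≤ b) for x ~ N(μ, Σ). The naive Monte Carlo baseline below illustrates the failure mode the abstract alludes to: when F is very small, almost no samples land inside the polytope and the estimate degenerates, which is what motivates specialised geometry-aware methods.

```python
import numpy as np

def mc_constrained_gaussian_mass(mu, Sigma, A, b, n_samples=100_000, seed=0):
    """Naive Monte Carlo estimate of P(A x <= b) for x ~ N(mu, Sigma).

    Baseline sketch only: for small probabilities almost no samples fall
    inside the polytope {x : A x <= b}, so the estimator's relative error
    blows up -- the failure mode that motivates specialised methods.
    """
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(mu, Sigma, size=n_samples)  # (n, d) draws
    inside = np.all(x @ A.T <= b, axis=1)                   # (n,) indicator
    return inside.mean()

# Example: standard bivariate normal restricted to the positive orthant,
# i.e. A = -I, b = 0, whose mass is exactly 1/4 by symmetry.
if __name__ == "__main__":
    d = 2
    est = mc_constrained_gaussian_mass(np.zeros(d), np.eye(d),
                                       -np.eye(d), np.zeros(d))
    print(est)
```

For a high-dimensional, tightly constrained domain the same estimator would return 0 for any feasible sample budget, which is why the small-volume regime is singled out above.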

Papers citing this work

Integral, mean and covariance of the simplex-truncated multivariate normal distribution
Compositional data, which is data consisting of fractions or probabilities, is common in many fields including ecology, economics, physical science and political science.
Bayesian conjugacy in probit, tobit, multinomial probit and extensions: A review and new results
TLDR: It is proved that the likelihoods induced by these formulations share a common analytical structure that implies conjugacy with a broad class of distributions, namely the unified skew-normals (SUN), which generalize Gaussians to skewed contexts.
Skew Gaussian Processes for Classification
TLDR: This paper proposes skew-Gaussian processes (SkewGPs) as a non-parametric prior over functions and verifies empirically that the proposed SkewGP classifier provides better performance than a GP classifier based on either Laplace's method or Expectation Propagation.
Preferential Bayesian optimisation with Skew Gaussian Processes
TLDR: It is proved that the true posterior distribution of the preference function is a Skew Gaussian Process (SkewGP), with highly skewed pairwise marginals, and thus it is shown that Laplace's method usually provides a very poor approximation.
Good Classifiers are Abundant in the Interpolating Regime
TLDR: The results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice, and that approaches based on the statistical mechanics of learning may offer a promising alternative.
BayesCG As An Uncertainty Aware Version of CG
TLDR: This work's CG-based implementation of BayesCG under a structure-exploiting prior distribution represents an 'uncertainty-aware' version of CG, consisting of CG iterates and posterior covariances that can be propagated to subsequent computations.
Elliptical Slice Sampling for Probabilistic Verification of Stochastic Systems with Signal Temporal Logic Specifications
TLDR: A method for probabilistic verification of linearizable systems with Gaussian and Gaussian mixture noise models (e.g. from perception modules or machine learning components), using a Markov chain Monte Carlo slice sampler.
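As a generic illustration of the slice-sampling machinery referenced here (a minimal sketch of elliptical slice sampling under a polytope indicator, not the verification method of the paper above; names are illustrative): an auxiliary draw ν ~ N(0, Σ) defines an ellipse x cos θ + ν sin θ through the current state, and the angle bracket is shrunk toward θ = 0 until the proposal satisfies Ax ≤ b.

```python
import numpy as np

def ess_step(x, Sigma_chol, A, b, rng):
    """One elliptical slice sampling step for N(0, Sigma) truncated to Ax <= b.

    Angle-shrinkage variant: propose points on the ellipse through x and an
    auxiliary Gaussian draw, shrinking the bracket toward theta = 0 (where the
    proposal equals x, which is feasible) until the proposal is in the polytope.
    """
    d = x.shape[0]
    nu = Sigma_chol @ rng.standard_normal(d)   # auxiliary draw ~ N(0, Sigma)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        prop = x * np.cos(theta) + nu * np.sin(theta)
        if np.all(A @ prop <= b):
            return prop
        # shrink the bracket around theta = 0; termination is guaranteed
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

def sample_truncated_gaussian(Sigma, A, b, x0, n_steps=1000, seed=0):
    """Run the chain from a feasible start x0 (A x0 <= b)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    xs = np.empty((n_steps, x0.shape[0]))
    x = x0
    for i in range(n_steps):
        x = ess_step(x, L, A, b, rng)
        xs[i] = x
    return xs
```

Each step leaves the linearly truncated Gaussian invariant and needs no tuning, which is why samplers of this family recur throughout the works listed on this page.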
A Unified Framework for Closed-Form Nonparametric Regression, Classification, Preference and Mixed Problems with Skew Gaussian Processes
TLDR: In other cases, such as classification, preference learning, ordinal regression and mixed problems, the likelihood is no longer conjugate to the GP prior and a closed-form expression for the posterior is not available.
Evaluating State-of-the-Art Classification Models Against Bayes Optimality
TLDR: The technique relies on a fundamental result that the Bayes error is invariant under invertible transformations, and can therefore be computed for Gaussian base distributions, which can be done efficiently using Holmes-Diaconis-Ross integration.
...

References

Showing 1-10 of 62 references
Estimating Orthant Probabilities of High-Dimensional Gaussian Vectors with An Application to Set Estimation
TLDR: This work focuses on the high-dimensional case and presents a two-step procedure relying on both deterministic and stochastic techniques to derive conservative estimates of excursion sets of expensive-to-evaluate deterministic functions under a Gaussian random field prior.
Finite-dimensional Gaussian approximation with linear inequality constraints
TLDR: The finite-dimensional Gaussian approach from Maatouk and Bay (2017), which can satisfy inequality conditions everywhere, is considered; its full framework together with a Hamiltonian Monte Carlo-based sampler provides efficient results on both data fitting and uncertainty quantification.
Gaussian process modeling with inequality constraints
TLDR: This paper introduces a new framework for incorporating constraints in Gaussian process modeling, including bound, monotonicity and convexity constraints, and extends this framework to any type of linear constraint.
Parallel Bayesian Global Optimization of Expensive Functions
We consider parallel global optimization of derivative-free expensive-to-evaluate functions, and propose an efficient method based on stochastic approximation for implementing a conceptual Bayesian…
The normal law under linear restrictions: simulation and estimation via minimax tilting
Simulation from the truncated multivariate normal distribution in high dimensions is a recurrent problem in statistical computing and is typically only feasible by using approximate Markov chain…
Evaluation of Gaussian orthant probabilities based on orthogonal projections to subspaces
In this paper, a new procedure is described for evaluating the probability that all elements of a normally distributed vector are non-negative, which is called the non-centered orthant probability.
Elements of Sequential Monte Carlo
TLDR: This tutorial reviews sequential Monte Carlo (SMC), a random-sampling-based class of methods for approximate inference, and discusses the SMC estimate of the normalizing constant and how it can be used for pseudo-marginal inference and inference evaluation.
Approximations to Multivariate Normal Rectangle Probabilities Based on Conditional Expectations
Two new approximations for multivariate normal probabilities for rectangular regions, based on conditional expectations and regression with binary variables, are proposed. One is a…
Computing Multivariate Normal Probabilities: A New Look
This article describes and compares several numerical methods for finding multivariate probabilities over a rectangle. A large computational study shows how the computation times depend on the…
Fast and Exact Simulation of Multivariate Normal and Wishart Random Variables with Box Constraints
TLDR: This work introduces computationally efficient methods to make exact and independent draws from both the multivariate normal and Wishart distributions with box constraints, improving the feasibility of Monte Carlo-based inference for box-constrained multivariate normal and Wishart distributions.
...