Corpus ID: 249375545

Mean field approximations via log-concavity

@inproceedings{Lacker2022MeanFA,
  title={Mean field approximations via log-concavity},
  author={Daniel Lacker and Sumit Mukherjee and Lane Chun Yeung},
  year={2022}
}
We propose a new approach to deriving quantitative mean field approximations for any probability measure P on R^n with density proportional to e^{f(x)}, for f strongly concave. We bound the mean field approximation for the log partition function log ∫_{R^n} e^{f(x)} dx in terms of ∑_{i≠j} E_{Q∗}|∂_{ij}f|², for a semi-explicit probability measure Q∗ characterized as the unique mean field optimizer, or equivalently as the minimizer of the relative entropy H(·|P) over product measures. This notably does not involve metric… 
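For readers approaching the abstract cold, the identity behind the mean field approximation is the Gibbs variational principle; the following restatement is standard background in our own notation (not quoted from the paper), with h(Q) denoting differential entropy:

  \log \int_{\mathbb{R}^n} e^{f(x)}\,dx = \sup_{Q} \Big\{ \mathbb{E}_Q[f] + h(Q) \Big\},
  \qquad
  \mathbb{E}_Q[f] + h(Q) = \log \int_{\mathbb{R}^n} e^{f(x)}\,dx - H(Q \,|\, P).

Restricting the supremum to product measures Q = Q_1 ⊗ ⋯ ⊗ Q_n gives the mean field approximation, so the approximation error is exactly the infimum of H(Q|P) over product measures, which is the quantity bounded above in terms of ∑_{i≠j} E_{Q∗}|∂_{ij}f|².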

Approximately optimal distributed stochastic controls beyond the mean field setting

We study high-dimensional stochastic optimal control problems in which many agents cooperate to minimize a convex cost functional. We consider both the full-information problem, in which each agent…

References

SHOWING 1-10 OF 55 REFERENCES

Mean-field approximation, convex hierarchies, and the optimality of correlation rounding: a unified perspective

Techniques from spin glass theory are combined to prove (in a strong sense) the optimality of correlation rounding, refuting a recent conjecture of Allen, O’Donnell, and Zhou.

Variational Inference in high-dimensional linear regression

This work derives a limiting infinite-dimensional variational formula for the log normalizing constant of the posterior distribution in high-dimensional Bayesian linear regression with product priors, and establishes that under an additional “separation” condition the variational problem has a unique optimizer, which governs the probabilistic properties of the posterior distribution.

Variational Inference: A Review for Statisticians

Variational inference (VI), a method from machine learning that approximates probability densities through optimization, is reviewed, and a variant that uses stochastic optimization to scale up to massive data is derived.
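As a concrete toy illustration of this optimization view, and of the log-concave setting of the paper above, here is a minimal sketch of our own (not code from either reference), assuming a Gaussian target P(x) ∝ exp(f(x)) with f(x) = -0.5·x'Ax + b'x and A positive definite:

import numpy as np

# Naive mean field for a Gaussian target: the optimal product factors are
# Q_i = N(m_i, 1/A_ii), and coordinate ascent (CAVI) on the means reduces
# to Gauss-Seidel for the linear system A m = b, which converges for SPD A.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # positive definite precision matrix
b = rng.standard_normal(n)

m = np.zeros(n)
for _ in range(200):              # coordinate-ascent sweeps
    for i in range(n):
        m[i] = (b[i] - A[i] @ m + A[i, i] * m[i]) / A[i, i]

var = 1.0 / np.diag(A)            # optimal per-coordinate variances
# Mean field value E_Q[f] + h(Q), with h the differential entropy of Q.
Ef = -0.5 * (m @ A @ m + np.sum(np.diag(A) * var)) + b @ m
hQ = 0.5 * np.sum(np.log(2 * np.pi * np.e * var))
mf_value = Ef + hQ

# Exact log partition function, for comparison.
sign, logdet = np.linalg.slogdet(A)
logZ = 0.5 * (n * np.log(2 * np.pi) - logdet + b @ np.linalg.solve(A, b))
print(f"mean field value: {mf_value:.6f}")
print(f"exact log Z:      {logZ:.6f}")

The gap between log Z and the printed mean field value equals the infimum of H(Q|P) over product measures; it is strictly positive here because A has nonzero off-diagonal entries, and it shrinks as the interactions ∂_{ij}f = -A_{ij} shrink, in line with the bound described in the abstract above.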

The Entropy Per Coordinate of a Random Vector is Highly Constrained Under Convexity Conditions

New constraints on entropy per coordinate are given for so-called convex or hyperbolic probability measures on Euclidean spaces, which generalize the results under the log-concavity assumption, expose the extremal role of multivariate Pareto-type distributions, and give some applications.

Theoretical and Computational Guarantees of Mean Field Variational Inference for Community Detection

The mean field method for community detection under the Stochastic Block Model is shown to have a linear convergence rate and to reach the minimax rate within $\log n$ iterations; similar optimality results, which may be of independent interest, are obtained for Gibbs sampling and for an iterative maximum likelihood procedure.

On the properties of variational approximations of Gibbs posteriors

The main finding is that such a variational approximation of the Gibbs posterior often has the same rate of convergence as the original PAC-Bayesian procedure it approximates.

Graphical Models, Exponential Families, and Variational Inference

The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.

Nonlinear large deviations: Beyond the hypercube

  • Jun Yan, The Annals of Applied Probability, 2020
This work presents a framework for computing large deviations for nonlinear functions of independent random variables supported on compact sets in Banach spaces, and gives a mathematically rigorous justification of the mean field approximation for a class of spin vector models.

Log-Concavity and Strong Log-Concavity: a review.

A new proof of Efron's theorem is provided using the recent asymmetric Brascamp-Lieb inequality due to Otto and Menz (2013), and along the way connections between log-concavity and other areas of mathematics and statistics are reviewed.

Decomposition of mean-field Gibbs distributions into product measures

We show that under a low complexity condition on the gradient of a Hamiltonian, Gibbs distributions on the Boolean hypercube are approximate mixtures of product measures whose probability vectors are
...