Corpus ID: 235592810

Local convexity of the TAP free energy and AMP convergence for Z2-synchronization

@article{Celentano2021LocalCO,
  title={Local convexity of the TAP free energy and AMP convergence for Z2-synchronization},
  author={Michael Celentano and Zhou Fan and Song Mei},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.11428}
}
We study mean-field variational Bayesian inference using the TAP approach, for Z2-synchronization as a prototypical example of a high-dimensional Bayesian model. We show that for any signal strength λ > 1 (the weak-recovery threshold), there exists a unique local minimizer of the TAP free energy functional near the mean of the Bayes posterior law. Furthermore, the TAP free energy in a local neighborhood of this minimizer is strongly convex. Consequently, a natural-gradient/mirror-descent… 
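As a concrete illustration of the setting (a standard textbook-style sketch, not code from the paper), here is a minimal numpy implementation of Bayes-AMP for Z2-synchronization under the model Y = (λ/n)·xxᵀ + W with GOE noise W. The spectral initialization and the fixed denoiser parameter γ_t = λ (which is the Bayes-optimal choice at the state-evolution fixed point) are simplifying assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Z2-synchronization instance: Y = (lam/n) x x^T + W, x in {+-1}^n, W ~ GOE.
n, lam, T = 800, 2.0, 15                         # dimension, signal strength lam > 1, AMP iterations
x = rng.choice([-1.0, 1.0], size=n)              # planted sign vector
G = rng.normal(size=(n, n))
W = (G + G.T) / np.sqrt(2.0 * n)                 # GOE noise, off-diagonal variance 1/n
Y = (lam / n) * np.outer(x, x) + W

# Spectral initialization: top eigenvector of Y, rescaled to O(1) entries (informative for lam > 1).
s = np.sqrt(n) * np.linalg.eigh(Y)[1][:, -1]     # eigh returns ascending order; [:, -1] is the top

f_prev = np.zeros(n)                             # no Onsager correction on the first step
for _ in range(T):
    f = np.tanh(lam * s)                         # posterior-mean denoiser for a +-1 prior
    b = lam * np.mean(1.0 - f**2)                # Onsager coefficient: (1/n) sum of f'(s_i)
    s, f_prev = Y @ f - b * f_prev, f            # AMP update with memory-correction term

# Agreement with the truth up to a global sign flip.
overlap = abs(np.dot(np.sign(s), x)) / n
print(f"overlap with truth: {overlap:.2f}")
```

For λ = 2 the iterates stabilize within a handful of steps and the sign estimate agrees with the planted vector on the large majority of coordinates; below λ = 1 no estimator correlates with the signal.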

Citations

Sudakov-Fernique post-AMP, and a new proof of the local convexity of the TAP free energy

TLDR
An asymptotic comparison inequality, called the Sudakov-Fernique post-AMP inequality, is derived; in a certain class of problems involving a GOE matrix, it can probe properties of an optimization landscape locally around the iterates of an approximate message passing (AMP) algorithm.

Approximate Message Passing for orthogonally invariant ensembles: Multivariate non-linearities and spectral initialization

TLDR
A BayesOAMP algorithm is proposed that uses as its non-linearity the posterior mean conditioned on all preceding AMP iterates; the forms of the Onsager debiasing coefficients and the corresponding AMP state evolution are derived, which depend on the free cumulants of the noise spectral distribution.

The TAP free energy for high-dimensional linear regression

TLDR
This work rigorously establishes the Thouless-Anderson-Palmer (TAP) approximation arising from spin glass theory, and proves a conjecture of [23] in the special case of the spherical prior (at sufficiently high temperature).

Minimum ℓ1-norm interpolators: Precise asymptotics and multiple descent

TLDR
This paper considers the noisy sparse regression model under Gaussian design, focusing on linear sparsity and high-dimensional asymptotics (so that both the number of features and the sparsity level scale proportionally with the sample size), and provides rigorous theoretical justification for a curious multi-descent phenomenon.

Sampling from the Sherrington-Kirkpatrick Gibbs measure via algorithmic stochastic localization

TLDR
This work proves that, for any inverse temperature β < 1/2, there exists an algorithm with complexity O(n) that samples from a distribution which is close in normalized Wasserstein distance to the Gibbs measure μ, and introduces a suitable "stability" property for sampling algorithms, which is verified by many standard techniques.

References

Showing 1-10 of 111 references

Estimation of low-rank matrices via approximate message passing

TLDR
A practical algorithm is presented that can achieve Bayes-optimal accuracy above the spectral threshold and is used to derive detailed predictions for the problem of estimating a rank-one matrix in noise.
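The state-evolution predictions referenced here can be sketched in the rank-one ±1 case: writing α_t for the effective signal-to-noise ratio of the AMP iterates, a standard scalar recursion (stated here as an assumption, in the normalization α_{t+1} = λ² E[tanh(α_t + √α_t Z)], Z ~ N(0,1)) can be iterated by Monte Carlo in a few lines:

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.normal(size=500_000)   # Monte Carlo samples for the Gaussian expectation

def se_fixed_point(lam, alpha0=0.1, iters=25):
    """Iterate alpha -> lam^2 * E[tanh(alpha + sqrt(alpha) * Z)] to (near) convergence."""
    alpha = alpha0
    for _ in range(iters):
        # max(..., 0.0) guards against tiny negative Monte Carlo noise when alpha is near 0
        alpha = lam**2 * max(float(np.mean(np.tanh(alpha + np.sqrt(alpha) * Z))), 0.0)
    return alpha

for lam in (0.9, 1.5, 2.0):
    print(f"lambda = {lam}:  fixed-point SNR ~ {se_fixed_point(lam):.3f}")
```

Below the weak-recovery threshold λ = 1 the recursion collapses to zero (no correlation with the signal); above it, the effective SNR converges to a strictly positive fixed point that grows with λ.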

Mutual information in rank-one matrix estimation

TLDR
It is proved that the Bethe mutual information always yields an upper bound to the exact mutual information, using an interpolation method proposed by Guerra and later refined by Korada and Macris, in the case of rank-one symmetric matrix estimation.

Angular Synchronization by Eigenvectors and Semidefinite Programming.

  • A. Singer
  • Applied and Computational Harmonic Analysis
  • 2011

Vector approximate message passing

TLDR
This paper considers a “vector AMP” (VAMP) algorithm and shows that VAMP has a rigorous scalar state-evolution that holds under a much broader class of large random matrices A: those that are right-rotationally invariant.

Iterative estimation of constrained rank-one matrices in noise

  • S. Rangan, A. Fletcher
  • 2012 IEEE International Symposium on Information Theory Proceedings
  • 2012
TLDR
This work considers the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix and proposes a simple iterative procedure that reduces the problem to a sequence of scalar estimation computations.

Theoretical and Computational Guarantees of Mean Field Variational Inference for Community Detection

TLDR
The mean field method for community detection under the Stochastic Block Model has a linear convergence rate and converges to the minimax rate within $\log n$ iterations; similar optimality results are obtained for Gibbs sampling and for an iterative procedure to calculate the maximum likelihood estimate, which may be of independent interest.

Fundamental limits of symmetric low-rank matrix estimation

TLDR
This paper considers the high-dimensional inference problem where the signal is a low-rank symmetric matrix corrupted by additive Gaussian noise, and computes, in the large-dimension limit, the mutual information between the signal and the observations while the rank of the signal remains constant.

Generalized approximate message passing for estimation with random linear mixing

  • S. Rangan
  • 2011 IEEE International Symposium on Information Theory Proceedings
  • 2011
TLDR
G-AMP incorporates general measurement channels; the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations, analogous to the AWGN output channel case.

Non-Negative Principal Component Analysis: Message Passing Algorithms and Sharp Asymptotics

TLDR
This work proves that the estimation error undergoes a phase transition as the signal-to-noise ratio crosses a certain threshold, shows that, unlike in the unconstrained case, the estimation error depends on the spike vector, and characterizes the least favorable vectors.

A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications

TLDR
A framework is introduced which allows one to circumvent the intricate question of Lipschitz continuity of gradients by using an elegant and easy-to-check convexity condition that captures the geometry of the constraints.
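First-order methods in this Bregman spirit can be illustrated with the classic entropy kernel on the probability simplex (a generic mirror-descent sketch on a toy quadratic, not the paper's algorithm; the target vector `c` and step size are arbitrary choices). The multiplicative update x ← x ⊙ exp(-η ∇f(x)), followed by renormalization, replaces the Euclidean projection step:

```python
import numpy as np

# Minimize f(x) = 0.5 * ||x - c||^2 over the probability simplex using mirror
# descent with the negative-entropy kernel: a Bregman-geometry first-order step
# that keeps iterates strictly inside the simplex without explicit projection.
c = np.array([0.6, 0.5, -0.2])      # toy target (lies outside the simplex)

x = np.full(3, 1.0 / 3.0)           # start at the simplex barycenter
eta = 0.5                           # step size
for _ in range(2000):
    grad = x - c                    # gradient of f at x
    x = x * np.exp(-eta * grad)     # entropy-kernel (multiplicative) mirror step
    x = x / x.sum()                 # renormalize onto the simplex

print(np.round(x, 3))               # close to the constrained minimizer [0.55, 0.45, 0.0]
```

The iterates converge to the minimizer of f over the simplex, which here coincides with the Euclidean projection of c; coordinates driven to the boundary decay geometrically rather than being truncated to zero.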
...