# On the glassy nature of the hard phase in inference problems

```bibtex
@article{Antenucci2019OnTG,
  title   = {On the glassy nature of the hard phase in inference problems},
  author  = {Fabrizio Antenucci and Silvio Franz and Pierfrancesco Urbani and Lenka Zdeborov{\'a}},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1805.05857}
}
```

An algorithmically hard phase has been described in a range of inference problems: even when the signal can be reconstructed with small error from an information-theoretic point of view, known algorithms fail unless the noise-to-signal ratio is sufficiently small. This hard phase is typically understood as a metastable branch of the dynamical evolution of message-passing algorithms. In this work we study the metastable branch for a prototypical inference problem, the low-rank matrix factorization…
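To make the setting concrete, here is a minimal sketch of the kind of message-passing iteration the abstract refers to: approximate message passing (AMP) for a rank-one spiked Wigner model with a binary signal. This is an illustration, not the paper's own code; the model normalization, the `tanh` denoiser, and all parameter values are assumptions for the example.

```python
import numpy as np

# Sketch of AMP for the spiked Wigner model Y = (lam/n) x x^T + W,
# with x_i = +-1 and W symmetric Gaussian noise of variance ~1/n per entry.
# The tanh denoiser and the Onsager reaction term are the standard choices
# for a binary (Rademacher) prior; this is a simplified illustration.

def amp_rank_one(Y, iters=50, seed=0):
    n = Y.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                 # uninformative initialization
    x_old = np.zeros(n)
    for _ in range(iters):
        g, g_old = np.tanh(x), np.tanh(x_old)
        onsager = np.mean(1.0 - g ** 2)        # Onsager correction term
        x, x_old = Y @ g - onsager * g_old, x
    return np.sign(np.tanh(x))

# Synthetic instance well inside the easy phase (large signal-to-noise lam)
rng = np.random.default_rng(1)
n, lam = 2000, 3.0
v = rng.choice([-1.0, 1.0], size=n)            # hidden rank-one signal
G = rng.standard_normal((n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2.0)                   # symmetric, entries ~ N(0, 1/n)
Y = (lam / n) * np.outer(v, v) + W
x_hat = amp_rank_one(Y)
overlap = abs(np.dot(x_hat, v)) / n            # fraction of aligned signs
```

At large signal-to-noise ratio the iteration escapes the uninformative fixed point and `overlap` approaches 1; the hard phase discussed in the paper is the regime where reconstruction is information-theoretically possible but such iterations remain stuck on the metastable branch.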

## 23 Citations

### Marvels and Pitfalls of the Langevin Algorithm in Noisy High-dimensional Inference

- Computer Science, Physical Review X
- 2020

The results show that the algorithmic threshold of the Langevin algorithm is sub-optimal with respect to the one given by AMP; this phenomenon is conjectured to be due to the residual glassiness present in that region of parameters.

### The price of ignorance: how much does it cost to forget noise structure in low-rank matrix estimation?

- Computer Science, ArXiv
- 2022

We consider the problem of estimating a rank-1 signal corrupted by structured rotationally invariant noise, and address the following question: how well do inference algorithms perform when the…

### Passed & Spurious: Descent Algorithms and Local Minima in Spiked Matrix-Tensor Models

- Computer Science, ICML
- 2019

This work analyses quantitatively the interplay between the loss landscape and performance of descent algorithms in a prototypical inference problem, the spiked matrix-tensor model, and evaluates in a closed form the performance of a gradient flow algorithm using integro-differential PDEs as developed in physics of disordered systems for the Langevin dynamics.

### Phase transitions in spiked matrix estimation: information-theoretic analysis

- Computer Science, ArXiv
- 2018

The minimal mean squared error is computed for the estimation of the low-rank signal and it is compared to the performance of spectral estimators and message passing algorithms.

### The Franz-Parisi Criterion and Computational Trade-offs in High Dimensional Statistics

- Computer Science, ArXiv
- 2022

This paper formally connects the free-energy-based Franz-Parisi criterion for hardness with low-degree hardness, establishes that for Gaussian additive models the “algebraic” notion of low-degree hardness implies failure of “geometric” local MCMC algorithms, and provides new low-degree lower bounds for sparse linear regression.

### Generalized approximate survey propagation for high-dimensional estimation

- Computer Science, ICML
- 2019

A new algorithm, named generalized approximate survey propagation (GASP), is proposed for solving generalized linear estimation (GLE) in the presence of prior or model mis-specification. It is shown that GASP outperforms the corresponding GAMP, reducing the reconstruction threshold and, for certain choices of its parameters, approaching Bayes-optimal performance.

### Mean-field inference methods for neural networks

- Computer Science, Journal of Physics A: Mathematical and Theoretical
- 2020

A selection of classical mean-field methods and recent progress relevant for inference in neural networks is reviewed, and the principles behind the derivations of high-temperature expansions, the replica method, and message-passing algorithms are recalled, highlighting their equivalences and complementarities.

### Efficient approximation of branching random walk Gibbs measures

- Mathematics, Computer Science, Electronic Journal of Probability
- 2022

The branching random walk, a time-homogeneous version of the continuous random energy model, is considered, and it is shown that a simple greedy search on a renormalized tree yields a linear-time algorithm that approximately samples from the Gibbs measure for every β < β_c, the (static) critical point.

### Mismatching as a tool to enhance algorithmic performances of Monte Carlo methods for the planted clique model

- Computer Science, Journal of Statistical Mechanics: Theory and Experiment
- 2021

This paper studies a very simple case of a mismatched over-parametrized algorithm applied to one of the most studied inference problems, the planted clique problem, and finds numerically that this over-parametrized version of the algorithm can reach the conjectured algorithmic threshold for the planted clique problem.

## References

Showing 1-10 of 65 references.

### Approximate survey propagation for statistical inference

- Computer Science, Journal of Statistical Mechanics: Theory and Experiment
- 2019

A variant of the AMP algorithm that takes into account the glassy nature of the system under consideration is introduced, and it is concluded that when there is a model mismatch between the true generative model and the inference model, the performance of AMP rapidly degrades both in terms of MSE and of convergence.

### Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula

- Computer Science, NIPS
- 2016

It is shown how to rigorously prove the conjectured replica formula for the symmetric rank-one case, which makes it possible to express the minimal mean-square error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA.

### The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing

- Computer Science, IEEE Transactions on Information Theory
- 2010

This paper proves that state evolution indeed holds asymptotically in the large-system limit for sensing matrices with independent and identically distributed Gaussian entries, thereby giving state evolution a rigorous foundation.

### Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications

- Computer Science, ArXiv
- 2017

A general form of the low-rank approximate message passing (Low-RAMP) algorithm is derived, unifying the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model, and vector (XY, Heisenberg and other) spin glasses.

### Statistical and computational phase transitions in spiked tensor estimation

- Computer Science, 2017 IEEE International Symposium on Information Theory (ISIT)
- 2017

The performance of Approximate Message Passing is studied, and it is shown that AMP achieves the MMSE for a large set of parameters and that factorization is algorithmically “easy” in a much wider region than previously believed.

### Survey propagation: An algorithm for satisfiability

- Computer Science, Random Struct. Algorithms
- 2005

A new type of message-passing algorithm is introduced which makes it possible to efficiently find a satisfying assignment of the variables in this difficult region of randomly generated formulas.

### Mutual information in rank-one matrix estimation

- Computer Science, 2016 IEEE Information Theory Workshop (ITW)
- 2016

It is proved that the Bethe mutual information always yields an upper bound to the exact mutual information, using an interpolation method proposed by Guerra and later refined by Korada and Macris, in the case of rank-one symmetric matrix estimation.

### Iterative estimation of constrained rank-one matrices in noise

- Computer Science, 2012 IEEE International Symposium on Information Theory Proceedings
- 2012

This work considers the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix and proposes a simple iterative procedure that reduces the problem to a sequence of scalar estimation computations.

### Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes

- Computer Science, Proceedings of the National Academy of Sciences
- 2016

It is shown that there are regions of the optimization landscape that are both robust and accessible, that their existence is crucial for achieving good performance on a class of particularly difficult learning problems, and an explanation of this good performance is proposed in terms of a nonequilibrium statistical physics framework.

### Phase transitions in sparse PCA

- Computer Science, 2015 IEEE International Symposium on Information Theory (ISIT)
- 2015

It is shown that both for low density and for large rank the problem undergoes a series of phase transitions, suggesting the existence of a region of parameters where estimation is information-theoretically possible but AMP (and presumably every other polynomial-time algorithm) fails.