MMSE of probabilistic low-rank matrix estimation: Universality with respect to the output channel
@article{Lesieur2015MMSEOP,
  title   = {MMSE of probabilistic low-rank matrix estimation: Universality with respect to the output channel},
  author  = {Thibault Lesieur and Florent Krzakala and Lenka Zdeborov{\'a}},
  journal = {2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton)},
  year    = {2015},
  pages   = {680-687}
}
This paper considers probabilistic estimation of a low-rank matrix from non-linear element-wise measurements of its elements. We derive the corresponding approximate message passing (AMP) algorithm and its state evolution. Relying on non-rigorous but standard assumptions motivated by statistical physics, we characterize the minimum mean squared error (MMSE) achievable information theoretically and with the AMP algorithm. Unlike in related problems of linear estimation, in the present setting…
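The state-evolution analysis mentioned in the abstract can be illustrated with a minimal sketch. For the special case of a symmetric rank-one spiked Wigner model with a standard Gaussian prior and Gaussian output channel (an illustrative assumption; the paper treats general non-linear output channels), the AMP overlap obeys the well-known recursion m_{t+1} = snr·m_t / (1 + snr·m_t), whose fixed point gives the asymptotic per-entry MMSE 1 − m*; the function and parameter names below are hypothetical:

```python
def state_evolution(snr, iters=500, m0=1e-8):
    """Iterate the overlap recursion m_{t+1} = snr*m / (1 + snr*m)
    for the rank-one spiked Wigner model with a unit Gaussian prior.
    Illustrative special case only; `snr`, `iters`, `m0` are assumed names."""
    m = m0
    for _ in range(iters):
        m = snr * m / (1.0 + snr * m)
    return m

# Below snr = 1 the overlap converges to zero (MMSE = 1);
# above it, the fixed point is m* = 1 - 1/snr, so the MMSE is 1/snr.
for snr in (0.5, 2.0, 4.0):
    m = state_evolution(snr)
    print(f"snr={snr}: overlap={m:.4f}, MMSE={1.0 - m:.4f}")
```

The phase transition at snr = 1 visible in this toy recursion is the kind of algorithmic/information-theoretic threshold the paper characterizes for general output channels.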
78 Citations
Mismatched Estimation of rank-one symmetric matrices under Gaussian noise
- Mathematics, Computer Science · ArXiv
- 2021
The full exact analytic expression of the asymptotic mean squared error (MSE) is derived in the large system size limit for the particular case of Gaussian priors and additive noise.
Mutual information in rank-one matrix estimation
- Computer Science · 2016 IEEE Information Theory Workshop (ITW)
- 2016
In the case of rank-one symmetric matrix estimation, it is proved that the Bethe mutual information always yields an upper bound on the exact mutual information, using an interpolation method proposed by Guerra and later refined by Korada and Macris.
Fundamental limits of symmetric low-rank matrix estimation
- Computer Science · COLT
- 2017
This paper considers the high-dimensional inference problem where the signal is a low-rank symmetric matrix corrupted by additive Gaussian noise, and computes the limit, in the large-dimension setting, of the mutual information between the signal and the observations while the rank of the signal remains constant.
Rank-one matrix estimation: analysis of algorithmic and information theoretic limits by the spatial coupling method
- Computer Science · ArXiv
- 2018
The spatial coupling methodology developed in the framework of error-correcting codes is used to rigorously derive the mutual information for the symmetric rank-one case, and it is shown that the computational gap vanishes for the proposed spatially coupled model, a promising feature with many possible applications.
Fundamental limits of low-rank matrix estimation: the non-symmetric case
- Computer Science
- 2017
This work considers the high-dimensional inference problem where the signal is a low-rank matrix corrupted by additive Gaussian noise, and computes the limit, in the large-dimension setting, of the mutual information between the signal and the observations, as well as the matrix minimum mean square error.
Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula
- Computer Science · NIPS
- 2016
It is shown how to rigorously prove the conjectured formula for the symmetric rank-one case, which makes it possible to express the minimal mean-square error and to characterize the detectability phase transitions in a large set of estimation problems ranging from community detection to sparse PCA.
Phase transitions in spiked matrix estimation: information-theoretic analysis
- Computer Science · ArXiv
- 2018
The minimal mean squared error is computed for the estimation of the low-rank signal and it is compared to the performance of spectral estimators and message passing algorithms.
Phase Transitions and Sample Complexity in Bayes-Optimal Matrix Factorization
- Computer Science · IEEE Transactions on Information Theory
- 2016
This work computes the minimal mean squared error achievable, in principle, in any computational time, as well as the error that can be achieved by an efficient approximate message passing algorithm, based on an asymptotic state-evolution analysis of the algorithm.
Constrained Low-rank Matrix Estimation: Phase Transitions, Approximate Message Passing and Applications
- Computer Science · ArXiv
- 2017
This work unifies the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model, and vector (XY, Heisenberg and other) spin glasses.
Information-Theoretic Bounds and Phase Transitions in Clustering, Sparse PCA, and Submatrix Localization
- Computer Science · IEEE Transactions on Information Theory
- 2018
The upper bounds show that for each of these problems there is a significant regime where reliable detection is information-theoretically possible but where known algorithms such as PCA fail completely, since the spectrum of the observed matrix is uninformative.
References
Showing 1-10 of 32 references
Iterative estimation of constrained rank-one matrices in noise
- Computer Science · 2012 IEEE International Symposium on Information Theory Proceedings
- 2012
This work considers the problem of estimating a rank-one matrix in Gaussian noise under a probabilistic model for the left and right factors of the matrix and proposes a simple iterative procedure that reduces the problem to a sequence of scalar estimation computations.
Phase transitions in sparse PCA
- Computer Science · 2015 IEEE International Symposium on Information Theory (ISIT)
- 2015
It is shown that, both for low density and for large rank, the problem undergoes a series of phase transitions, suggesting the existence of a region of parameters where estimation is information-theoretically possible but where AMP (and presumably every other polynomial-time algorithm) fails.
Generalized approximate message passing for estimation with random linear mixing
- Computer Science · 2011 IEEE International Symposium on Information Theory Proceedings
- 2011
G-AMP incorporates general measurement channels, and it is shown that the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations, similar to the AWGN output channel case.
Phase Transitions and Sample Complexity in Bayes-Optimal Matrix Factorization
- Computer Science · IEEE Transactions on Information Theory
- 2016
This work computes the minimal mean squared error achievable, in principle, in any computational time, as well as the error that can be achieved by an efficient approximate message passing algorithm, based on an asymptotic state-evolution analysis of the algorithm.
Information-theoretically optimal sparse PCA
- Computer Science · 2014 IEEE International Symposium on Information Theory
- 2014
This work analyzes an Approximate Message Passing algorithm to estimate the underlying signal and shows, in the high dimensional limit, that the AMP estimates are information-theoretically optimal and effectively provides a single-letter characterization of the sparse PCA problem.
Adaptive damping and mean removal for the generalized approximate message passing algorithm
- Computer Science · 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2015
This work proposes adaptive-damping and mean-removal strategies that aim to prevent divergence; numerical results demonstrate significantly enhanced robustness to non-zero-mean, rank-deficient, column-correlated, and ill-conditioned A.
Low-rank matrix reconstruction and clustering via approximate message passing
- Computer Science · NIPS
- 2013
This work proposes an efficient approximate message passing algorithm, derived from the belief propagation algorithm, to perform Bayesian inference for matrix reconstruction, and successfully applies it to a clustering problem by reformulating clustering as a low-rank matrix reconstruction problem with an additional structural property.
The Dynamics of Message Passing on Dense Graphs, with Applications to Compressed Sensing
- Computer Science · IEEE Transactions on Information Theory
- 2010
This paper proves that state evolution indeed holds asymptotically in the large system limit for sensing matrices with independent and identically distributed Gaussian entries, providing a rigorous foundation for state evolution.
Computational Barriers in Minimax Submatrix Detection
- Computer Science · ArXiv
- 2013
The minimax detection of a small submatrix of elevated mean in a large matrix contaminated by additive Gaussian noise is studied and it is shown that the hardness of attaining the minimax estimation rate can crucially depend on the loss function.
Message-passing algorithms for compressed sensing
- Computer Science · Proceedings of the National Academy of Sciences
- 2009
A simple costless modification to iterative thresholding is introduced making the sparsity–undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures, inspired by belief propagation in graphical models.