Prediction Errors for Penalized Regressions based on Generalized Approximate Message Passing

  • A. Sakata
  • Published 26 June 2022
  • Computer Science
  • ArXiv
We discuss the prediction accuracy of assumed statistical models in terms of prediction errors for the generalized linear model and penalized maximum likelihood methods. We derive the forms of estimators for the prediction errors: Cp criterion, information criteria, and leave-one-out cross validation (LOOCV) error, using the generalized approximate message passing (GAMP) algorithm and replica method. These estimators coincide with each other when the number of model parameters is sufficiently… 
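For intuition about the LOOCV estimator discussed above, the following sketch (NumPy, synthetic data; plain ridge regression rather than the paper's GAMP setting) verifies the classical exact leave-one-out identity for linear smoothers, e_i^loo = (y_i − ŷ_i)/(1 − H_ii), against brute-force refitting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + 0.3 * rng.standard_normal(n)

lam = 1.0
# Ridge hat matrix H = X (X^T X + lam I)^{-1} X^T
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
y_hat = H @ y

# Leave-one-out residuals via the leverage shortcut (exact for ridge,
# no refitting needed): e_i = (y_i - yhat_i) / (1 - H_ii)
loocv = np.mean(((y - y_hat) / (1.0 - np.diag(H))) ** 2)

# Brute-force LOOCV for comparison: refit n times
errs = []
for i in range(n):
    mask = np.arange(n) != i
    Xi, yi = X[mask], y[mask]
    b = np.linalg.solve(Xi.T @ Xi + lam * np.eye(p), Xi.T @ yi)
    errs.append((y[i] - X[i] @ b) ** 2)
assert np.isclose(loocv, np.mean(errs))
```

For ridge the shortcut is exact; for nonlinear penalized estimators it is approximate message-passing constructions of the kind the paper develops that play the analogous role.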

Statistical Learning with Sparsity: The Lasso and Generalizations

Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.

Asymptotic Errors for High-Dimensional Convex Penalized Linear Regression beyond Gaussian Matrices

This work rigorously derives an explicit formula for the asymptotic mean squared error of penalized convex regression estimators, such as the LASSO or the elastic net, for a broad class of random matrices corresponding to rotationally invariant data matrices with arbitrary spectrum.
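As a concrete instance of the estimators this formula covers, here is a minimal proximal-gradient sketch of the elastic net (NumPy; the synthetic data, penalty levels, and iteration count are illustrative assumptions, and the i.i.d. Gaussian design is the simplest case the result generalizes beyond):

```python
import numpy as np

def prox_enet(v, t, lam1, lam2):
    """Prox of t*(lam1*||.||_1 + 0.5*lam2*||.||_2^2): soft-threshold, then shrink."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0) / (1.0 + t * lam2)

def elastic_net(X, y, lam1=0.1, lam2=0.1, n_iter=1000):
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = prox_enet(beta - step * grad, step, lam1, lam2)
    return beta

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta0 = np.zeros(p)
beta0[:5] = 1.0
y = X @ beta0 + 0.1 * rng.standard_normal(n)

beta_hat = elastic_net(X, y)
mse = np.mean((beta_hat - beta0) ** 2)   # the quantity the asymptotic formula predicts
```

The asymptotic theory predicts the value that `mse` concentrates on as n, p grow proportionally.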

Generalized approximate message passing for estimation with random linear mixing

  • S. Rangan
  • Computer Science
    2011 IEEE International Symposium on Information Theory Proceedings
  • 2011
G-AMP incorporates general measurement channels, and it is shown that, similarly to the AWGN output channel case, the asymptotic behavior of the G-AMP algorithm under large i.i.d. Gaussian transform matrices is described by a simple set of state evolution (SE) equations.
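The special case that G-AMP generalizes, an AWGN output channel with a soft-thresholding denoiser (i.e. AMP for sparse recovery), can be sketched as follows; the empirical threshold rule and problem sizes are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
n, N = 250, 500                 # measurements, signal dimension
rho = 0.1                       # fraction of nonzero entries
A = rng.standard_normal((n, N)) / np.sqrt(n)   # i.i.d. Gaussian transform matrix
x0 = rng.standard_normal(N) * (rng.random(N) < rho)
y = A @ x0

x = np.zeros(N)
z = y.copy()
for _ in range(30):
    tau = np.linalg.norm(z) / np.sqrt(n)   # empirical noise level; SE tracks tau^2
    x_new = soft(x + A.T @ z, tau)
    # Onsager correction: (1/n) * divergence of the denoiser
    z = y - A @ x_new + z * (np.count_nonzero(x_new) / n)
    x = x_new

mse = np.mean((x - x0) ** 2)
```

State evolution predicts the per-iteration decay of `tau ** 2`, which tracks the empirical `mse` here.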

On the interplay between noise and curvature and its effect on optimization and generalization

This work clarifies the distinction between the Fisher matrix, the Hessian, and the covariance matrix of the gradients and explains how both curvature and noise are relevant to properly estimate the generalization gap.
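The distinction can be checked numerically: for logistic regression (a canonical-link GLM) the Fisher matrix equals the Hessian of the negative log-likelihood, and at the true parameters of a well-specified model both match the uncentered gradient covariance in expectation. A sketch with synthetic data (the sizes and parameter values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 5000, 3
X = rng.standard_normal((n, p))
w = np.array([1.0, -0.5, 0.25])          # true parameters (well-specified model)
mu = 1.0 / (1.0 + np.exp(-X @ w))        # model probabilities at w
y = (rng.random(n) < mu).astype(float)

G = (mu - y)[:, None] * X                # per-sample gradients of the NLL
grad_cov = G.T @ G / n                   # (uncentered) gradient covariance
hessian = (X * (mu * (1.0 - mu))[:, None]).T @ X / n   # exact Hessian
fisher = hessian                         # canonical-link GLM: Fisher == Hessian

# At the true parameters of a well-specified model these agree in expectation
gap = np.max(np.abs(grad_cov - fisher))
```

Away from the true parameters, or under model misspecification, `grad_cov` and the Hessian separate, which is the regime the generalization-gap analysis is concerned with.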

Information Theory and an Extension of the Maximum Likelihood Principle

The classical maximum likelihood principle can be considered to be a method of asymptotic realization of an optimum estimate with respect to a very general information theoretic criterion to provide answers to many practical problems of statistical model fitting.
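The criterion this principle leads to is AIC, defined as 2k − 2 log L̂ for a model with k parameters and maximized likelihood L̂; a minimal model-selection sketch with an assumed polynomial example:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = np.linspace(-1.0, 1.0, n)
y = 1.0 + 2.0 * x - 1.5 * x**2 + 0.2 * rng.standard_normal(n)  # true degree: 2

def aic_gaussian(x, y, degree):
    """AIC = 2k - 2 log L for a degree-`degree` polynomial with Gaussian noise."""
    coef = np.polyfit(x, y, degree)
    sigma2 = np.mean((y - np.polyval(coef, x)) ** 2)   # ML variance estimate
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    k = degree + 2                                     # coefficients + noise variance
    return 2.0 * k - 2.0 * loglik

aics = {d: aic_gaussian(x, y, d) for d in range(6)}
best = min(aics, key=aics.get)   # typically at or near the true degree
```

Underfitting models are penalized by their poor likelihood, overfitting models by the 2k term; the minimizer trades the two off.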

Probabilistic reasoning in intelligent systems - networks of plausible inference

  • J. Pearl
  • Computer Science
    Morgan Kaufmann series in representation and reasoning
  • 1989
The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic.

Algebraic Geometry and Statistical Learning Theory

The theory achieved here underpins accurate estimation techniques in the presence of singularities and lays the foundations for the use of algebraic geometry in statistical learning theory.

Statistical physics of spin glasses and information processing : an introduction

1. Mean-field theory of phase transitions
2. Mean-field theory of spin glasses
3. Replica symmetry breaking
4. Gauge theory of spin glasses
5. Error-correcting codes
6. Image restoration
7.

The Elements of Statistical Learning

  • Springer Science & Business Media
  • 2009