Eigenvectors from Eigenvalues Sparse Principal Component Analysis (EESPCA)

  • H. R. Frost
  • Published 2 June 2020
  • Computer Science
  • Journal of Computational and Graphical Statistics
We present a novel technique for sparse principal component analysis. This method, named Eigenvectors from Eigenvalues Sparse Principal Component Analysis (EESPCA), is based on the recently detailed formula for computing normed, squared eigenvector loadings of a Hermitian matrix from the eigenvalues of the full matrix and associated sub-matrices. Relative to the state-of-the-art LASSO-based sparse PCA method of Witten, Tibshirani and Hastie, the EESPCA technique offers a two-orders-of-magnitude… 
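The identity the abstract refers to expresses each normed, squared eigenvector loading of a Hermitian matrix purely in terms of eigenvalues of the full matrix and of its principal submatrices. As a minimal illustrative sketch (the function name is mine, and this implements only the identity itself, not the full EESPCA procedure), assuming the eigenvalues of A are distinct:

```python
import numpy as np

def squared_loadings(A):
    """Compute the normed, squared eigenvector loadings |v_ij|^2 of a
    real symmetric matrix A from eigenvalues alone, via the identity

        |v_ij|^2 * prod_{k != i} (lam_i - lam_k) = prod_k (lam_i - mu_k(M_j)),

    where lam are the eigenvalues of A and mu(M_j) are the eigenvalues
    of A with row and column j deleted. Assumes distinct eigenvalues.
    """
    n = A.shape[0]
    lam = np.linalg.eigvalsh(A)  # eigenvalues of the full matrix, ascending
    out = np.empty((n, n))
    for j in range(n):
        # eigenvalues of the j-th principal submatrix M_j
        Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)
        mu = np.linalg.eigvalsh(Mj)
        for i in range(n):
            num = np.prod(lam[i] - mu)                 # product over submatrix eigenvalues
            den = np.prod(np.delete(lam[i] - lam, i))  # product over k != i
            out[i, j] = num / den                      # = |v_ij|^2
    return out
```

Row i of the result holds the squared coordinates of the eigenvector for the i-th smallest eigenvalue, so each row sums to one; the values can be checked against a direct eigendecomposition with `np.linalg.eigh`.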

Figures from this paper

Citations

Technology-mediated teaching and learning process: A conceptual study of educators’ response amidst the Covid-19 pandemic
A hybrid educational model (HyFlex+Tec) enabling virtual and in-person education in higher-education institutions is defined, and the hybrid learning model is shown to support continuity of education and learning for teachers and students during the Covid-19 pandemic.

Integrated protein and transcriptome high-throughput spatial profiling
Spatial PrOtein and Transcriptome Sequencing (SPOTS) is introduced for high-throughput integration of transcriptome and protein profiling within the spatial context, revealing that spatially resolved multi-omic integration provides a comprehensive perspective on key biological processes in health and disease.


References

Sparse Principal Component Analysis
This work introduces a new method, sparse principal component analysis (SPCA), that uses the lasso (elastic net) to produce modified principal components with sparse loadings, and shows that PCA can be formulated as a regression-type optimization problem.
On Consistency and Sparsity for Principal Components Analysis in High Dimensions
  • I. Johnstone, A. Lu
  • Computer Science, Mathematics
  • Journal of the American Statistical Association
  • 2009
A simple algorithm for selecting a subset of coordinates with largest sample variances is provided, and it is shown that if PCA is done on the selected subset, then consistency is recovered, even if p(n) ≫ n.
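The coordinate-selection idea described above can be sketched in a few lines of NumPy. This is a rough illustration under my own naming and choices (fixed subset size `k`, sample variance as the selection criterion), not the authors' exact procedure:

```python
import numpy as np

def subset_pca_loading(X, k):
    """Sketch of PCA on a selected coordinate subset: keep the k columns
    of the n-by-p data matrix X with the largest sample variances, run
    PCA on that subset, and embed the leading loading vector back in R^p
    with zeros elsewhere."""
    n, p = X.shape
    keep = np.argsort(X.var(axis=0))[-k:]          # indices of the top-k variances
    Xs = X[:, keep] - X[:, keep].mean(axis=0)      # center the selected columns
    cov = (Xs.T @ Xs) / (n - 1)                    # sample covariance on the subset
    _, V = np.linalg.eigh(cov)
    v = np.zeros(p)
    v[keep] = V[:, -1]                             # leading eigenvector of the subset
    return v
```

The returned vector is unit-norm with at most `k` nonzero entries, which is the sense in which restricting PCA to a high-variance subset yields a sparse loading.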
A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis.
A penalized matrix decomposition (PMD), a new framework for computing a rank-K approximation to a matrix, is proposed, and connections are established between the SCoTLASS method for sparse principal component analysis and the method of Zou and others (2006).
A majorization-minimization approach to the sparse generalized eigenvalue problem
The proposed sparse GEV algorithm, which offers a general framework to solve any sparse GEV problem, will give rise to competitive algorithms for a variety of applications where specific instances of GEV problems arise.
A Direct Formulation for Sparse PCA Using Semidefinite Programming
A modification of the classical variational representation of the largest eigenvalue of a symmetric matrix is used, where cardinality is constrained, and a semidefinite programming-based relaxation is derived for the sparse PCA problem.
Spectral Bounds for Sparse PCA: Exact and Greedy Algorithms
This work considers an alternative discrete spectral formulation based on variational eigenvalue bounds, provides an effective greedy strategy as well as provably optimal solutions via branch-and-bound search, and reveals a simple renormalization step that improves approximate solutions obtained by any continuous method.
A Modified Principal Component Technique Based on the LASSO
In many multivariate statistical techniques, a set of linear functions of the original p variables is produced. One of the more difficult aspects of these techniques is the interpretation of these linear functions.
Simple principal components
  • S. Vines
  • Mathematics, Computer Science
  • 2000
An algorithm is presented for producing simple approximate principal components directly from a variance–covariance matrix, using a series of ‘simplicity preserving’ linear transformations that can always be represented by integers.
Principal component analysis based methods in bioinformatics studies
The goal of this article is to make bioinformatics researchers aware of the PCA technique and, more importantly, its most recent developments, so that this simple yet effective dimension reduction technique can be better employed in bioinformatics data analysis.