Principal Component Projection with Low-Degree Polynomials

@article{Farnham2020PrincipalCP,
  title={Principal Component Projection with Low-Degree Polynomials},
  author={Stephen D. Farnham and Lixin Shen and Bruce W. Suter},
  journal={Journal of Scientific Computing},
  year={2020},
  volume={85},
  pages={1-25}
}
In this paper, we consider approximations of principal component projection (PCP) without explicitly computing principal components. This problem has been studied in several recent works. The common feature of existing approaches is to view the PCP matrix as a matrix function, namely the composition of a step function with a rational function. To find an approximate PCP, the step function is approximated by a polynomial, while the rational function is evaluated by a fast ridge…
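
As a concrete illustration of this pipeline, here is a minimal Python/NumPy sketch, not the paper's construction: it approximates the projection of a vector v onto the eigenvectors of M = A^T A whose eigenvalues exceed a threshold lam, where each application of the rational function r(x) = x/(x + lam) costs one ridge-regression-type solve, and the step function at 1/2 is replaced by one simple illustrative polynomial (the normalized integral of (t(1-t))^k). The names step_poly, ridge_apply, and approx_pcp are hypothetical.

import numpy as np

def step_poly(k):
    # One illustrative polynomial approximation of the unit step at 1/2 on [0, 1]:
    # integrate (t(1-t))^k and normalize so that p(0) = 0 and p(1) = 1.
    # Modest k keeps the monomial coefficients well conditioned.
    base = np.polynomial.Polynomial([0.0, 1.0]) * np.polynomial.Polynomial([1.0, -1.0])
    P = (base ** k).integ()
    return P / P(1.0)

def ridge_apply(M, lam, w):
    # One black-box "ridge regression" step: returns (M + lam*I)^{-1} M w,
    # i.e., the rational function r(M) applied to w.
    return np.linalg.solve(M + lam * np.eye(M.shape[0]), M @ w)

def approx_pcp(M, lam, v, k=10):
    # Approximate the projection of v onto the eigenvectors of M with eigenvalue > lam
    # by evaluating p(r(M)) v with Horner's rule; each Horner step is one ridge solve.
    c = step_poly(k).coef                 # monomial coefficients c_0, ..., c_{2k+1}
    result = c[-1] * v
    for c_j in c[-2::-1]:
        result = c_j * v + ridge_apply(M, lam, result)
    return result

# Sanity check against an explicit eigendecomposition (used here only for
# validation; PCP methods of this kind are designed to avoid it):
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30))
M = A.T @ A / 200
v = rng.standard_normal(30)
lam = 1.0
w, U = np.linalg.eigh(M)
exact = U[:, w > lam] @ (U[:, w > lam].T @ v)
print(np.linalg.norm(approx_pcp(M, lam, v) - exact))

The approximation error concentrates on eigendirections whose eigenvalues lie near lam, where any polynomial surrogate of the step function is inaccurate; it shrinks as the polynomial degree grows or as the spectrum pulls away from the threshold.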

References

Principal Component Projection Without Principal Component Analysis
An iterative algorithm is introduced that provably computes the projection of a vector onto the top principal components of a matrix using few calls to any black-box routine for ridge regression, giving the first major runtime improvement over the naive method of combining PCA with regression.
Faster Principal Component Regression and Stable Matrix Chebyshev Approximation
This work solves principal component regression (PCR) up to a multiplicative accuracy of $1+\gamma$ by reducing the problem to $\tilde{O}(\gamma^{-1})$ black-box calls of ridge regression, and obtains both a general stable recurrence formula for matrix Chebyshev polynomials and a degree-optimal polynomial approximation to the matrix sign function.
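
For context, the exact-arithmetic version of such a matrix Chebyshev evaluation is a three-term recurrence; a minimal sketch, with matrix_cheb_apply as a hypothetical name and the spectrum of M assumed to lie in [-1, 1], is below. The cited work's contribution is a recurrence that stays stable when M is applied only approximately (e.g., through inexact ridge regression), which this naive version does not address.

import numpy as np

def matrix_cheb_apply(M, v, coeffs):
    # Evaluate p(M) v for p = sum_k coeffs[k] * T_k using the recurrence
    # T_{k+1}(M) v = 2 M (T_k(M) v) - T_{k-1}(M) v.
    t_prev, t_curr = v, M @ v             # T_0(M) v and T_1(M) v
    result = coeffs[0] * t_prev
    if len(coeffs) > 1:
        result = result + coeffs[1] * t_curr
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2.0 * (M @ t_curr) - t_prev
        result = result + c * t_curr
    return result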
Backward Stability of Iterations for Computing the Polar Decomposition
This work shows that a general iteration $X_{k+1} = f(X_k)$ for computing the unitary polar factor is backward stable under two conditions, and proves the backward stability of the scaled Newton iteration under the assumption that each iterate is computed in a mixed backward-forward stable manner.
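
For reference, the simplest member of this iteration family is the unscaled Newton iteration X_{k+1} = (X_k + X_k^{-T}) / 2 for the unitary polar factor of a nonsingular real matrix; the cited analysis concerns the scaled variant. A minimal sketch, with polar_newton, the iteration cap, and the tolerance as illustrative choices:

import numpy as np

def polar_newton(A, max_iter=30, tol=1e-12):
    # Newton iteration for the orthogonal polar factor U in A = U H:
    # X_{k+1} = (X_k + X_k^{-T}) / 2, starting from X_0 = A.
    X = A.astype(float)
    for _ in range(max_iter):
        X_new = 0.5 * (X + np.linalg.inv(X).T)
        if np.linalg.norm(X_new - X, "fro") <= tol * np.linalg.norm(X_new, "fro"):
            return X_new
        X = X_new
    return X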
Stability of the Lanczos Method for Matrix Function Approximation
This paper proves that finite-precision Lanczos essentially matches the exact-arithmetic guarantee if computations use roughly $\log(nC\|A\|)$ bits of precision, and raises the question of whether convergence in fewer than $poly(\kappa)$ iterations can be expected in finite precision, even for matrices with clustered, skewed, or otherwise favorable eigenvalue distributions.
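
A bare Lanczos sketch for approximating f(A) b with symmetric A, of the kind whose finite-precision behavior the paper analyzes, is below; lanczos_fA_b and the breakdown tolerance are illustrative, and no reorthogonalization is performed, which is exactly the regime where finite-precision effects arise.

import numpy as np

def lanczos_fA_b(A, b, f, m=30):
    # m-step Lanczos: build an orthonormal Krylov basis V and tridiagonal
    # T = V^T A V, then approximate f(A) b by ||b|| * V f(T) e_1.
    V = [b / np.linalg.norm(b)]
    alpha, beta = [], []
    for j in range(m):
        w = A @ V[j]
        alpha.append(V[j] @ w)
        w = w - alpha[j] * V[j] - (beta[j - 1] * V[j - 1] if j > 0 else 0.0)
        nb = np.linalg.norm(w)
        if j == m - 1 or nb < 1e-14:      # finished, or the Krylov space is invariant
            break
        beta.append(nb)
        V.append(w / nb)
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0])     # f(T) e_1 via the eigendecomposition of T
    return np.linalg.norm(b) * (np.column_stack(V) @ fT_e1)

# Example: approximate exp(A) b for a random symmetric A with spectrum near [-2, 2].
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
A = (A + A.T) / 2 / np.sqrt(100)
b = rng.standard_normal(100)
approx = lanczos_fA_b(A, b, np.exp)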
Accelerating Stochastic Gradient Descent using Predictive Variance Reduction
It is proved that this method enjoys the same fast convergence rate as stochastic dual coordinate ascent (SDCA) and stochastic average gradient (SAG), but the analysis is significantly simpler and more intuitive.
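
A minimal sketch of this variance reduction scheme (SVRG) applied to the ridge regression objective that appears throughout this page; svrg_ridge, the step size eta, and the epoch count are illustrative and would need tuning to the data scale.

import numpy as np

def svrg_ridge(A, b, lam, eta=0.01, epochs=20, seed=0):
    # SVRG for min_x (1/2n) ||A x - b||^2 + (lam/2) ||x||^2.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot, computed once per epoch.
        full_grad = A.T @ (A @ x_snap - b) / n + lam * x_snap
        for _ in range(n):
            i = rng.integers(n)
            a_i = A[i]
            # Per-sample gradients at the current point and at the snapshot.
            g = a_i * (a_i @ x - b[i]) + lam * x
            g_snap = a_i * (a_i @ x_snap - b[i]) + lam * x_snap
            # Variance-reduced stochastic step.
            x -= eta * (g - g_snap + full_grad)
    return x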
Stochastic dual coordinate ascent methods for regularized loss
A new analysis of Stochastic Dual Coordinate Ascent (SDCA) is presented, showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD.
A note on the summation of Chebyshev series (C. W. Clenshaw, 1955)
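
The summation scheme in this note is what is now called Clenshaw's algorithm; a minimal sketch for evaluating f(x) = sum_{k=0}^{N} a_k T_k(x), with clenshaw_sum as an illustrative name:

def clenshaw_sum(x, a):
    # Backward recurrence b_k = a_k + 2 x b_{k+1} - b_{k+2} for k = N, ..., 1,
    # then f(x) = a_0 + x b_1 - b_2.
    b1 = b2 = 0.0
    for a_k in reversed(a[1:]):
        b1, b2 = a_k + 2.0 * x * b1 - b2, b1
    return a[0] + x * b1 - b2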
Partial Differential Equations (L. C. Evans), American Mathematical Society, Providence, Rhode Island, 1998