Sublinear time spectral density estimation

  • Vladimir Braverman, Adit Krishnan, Christopher Musco
  • Published 8 April 2021
  • Computer Science, Mathematics
  • Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing
We present a new sublinear time algorithm for approximating the spectral density (eigenvalue distribution) of an n × n normalized graph adjacency or Laplacian matrix. The algorithm recovers the spectrum up to ε accuracy in the Wasserstein-1 distance in O(n · poly(1/ε)) time given sample access to the graph. This result complements recent work by David Cohen-Steiner, Weihao Kong, Christian Sohler, and Gregory Valiant (2018), which obtains a solution with runtime independent of n, but exponential in 1…
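As a concrete illustration of the Wasserstein-1 error metric used above (not of the paper's algorithm), the sketch below compares two spectral densities of the normalized adjacency matrix of an n-cycle, whose eigenvalues are known in closed form. The graph choice and size are arbitrary assumptions for the example; SciPy's `wasserstein_distance` computes the metric between the two empirical spectra.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Normalized adjacency matrix of the n-cycle: every vertex has degree 2,
# so D^{-1/2} A D^{-1/2} = A / 2, with eigenvalues cos(2*pi*k/n).
n = 200
A = np.zeros((n, n))
idx = np.arange(n)
A[idx, (idx + 1) % n] = 1.0
A[idx, (idx - 1) % n] = 1.0
A_norm = A / 2.0

# Spectral density = uniform distribution over the n eigenvalues.
computed = np.linalg.eigvalsh(A_norm)
closed_form = np.cos(2 * np.pi * np.arange(n) / n)

# Wasserstein-1 distance between the two spectral densities; for equal-size
# samples this is the average gap between the sorted eigenvalue lists.
err = wasserstein_distance(computed, closed_form)
```

Here the two spectra coincide, so `err` is numerically zero; an approximation algorithm like the paper's would instead aim for `err` ≤ ε.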

Network Density of States
Beyond providing visually compelling fingerprints of graphs, this paper shows how estimating spectral densities facilitates the computation of many common centrality measures, and uses spectral densities to extract meaningful information about graph structure that cannot be inferred from the extremal eigenpairs alone.
The kernel polynomial method
Efficient and stable algorithms for the calculation of spectral quantities and correlation functions are some of the key tools in computational condensed matter physics. This article reviews the kernel polynomial method, which is based on Chebyshev expansion.
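Since the review is about the kernel polynomial method (KPM), a minimal NumPy sketch may help: Chebyshev moments of a symmetric matrix with spectrum in (−1, 1) are estimated with Rademacher probe vectors, damped with Jackson coefficients to suppress Gibbs oscillations, and expanded into a density estimate. All sizes and parameter choices are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def kpm_density(A, M=50, n_probe=20, grid=None):
    """KPM sketch: Chebyshev-moment estimate of the spectral density
    of a symmetric A whose eigenvalues lie in (-1, 1)."""
    n = A.shape[0]
    # Stochastic estimate of mu_m = tr(T_m(A)) / n via Rademacher probes,
    # using the three-term recurrence T_{m+1} = 2 A T_m - T_{m-1}.
    mu = np.zeros(M)
    for _ in range(n_probe):
        z = rng.choice([-1.0, 1.0], size=n)
        t_prev, t_cur = z, A @ z
        mu[0] += z @ t_prev
        mu[1] += z @ t_cur
        for m in range(2, M):
            t_prev, t_cur = t_cur, 2.0 * (A @ t_cur) - t_prev
            mu[m] += z @ t_cur
    mu /= n_probe * n
    # Jackson damping coefficients.
    m_arr = np.arange(M)
    g = ((M - m_arr + 1) * np.cos(np.pi * m_arr / (M + 1))
         + np.sin(np.pi * m_arr / (M + 1)) / np.tan(np.pi / (M + 1))) / (M + 1)
    if grid is None:
        grid = np.linspace(-0.99, 0.99, 400)
    # Chebyshev expansion of the density against the arcsine weight.
    T = np.cos(np.outer(m_arr, np.arccos(grid)))  # T_m(x) on the grid
    rho = (g[0] * mu[0] + 2.0 * (g[1:] * mu[1:]) @ T[1:]) \
        / (np.pi * np.sqrt(1.0 - grid**2))
    return grid, rho

# Usage: density of a Wigner-like random symmetric matrix, rescaled so its
# spectrum sits inside (-1, 1); the density should integrate to about 1.
n = 300
G = rng.standard_normal((n, n))
A = (G + G.T) / 2.0
A /= 1.05 * np.abs(np.linalg.eigvalsh(A)).max()
grid, rho = kpm_density(A)
mass = rho.sum() * (grid[1] - grid[0])
```

A production implementation would rescale an arbitrary spectrum into (−1, 1) automatically and exploit sparse matrix-vector products; this sketch only shows the moment/damping/expansion pipeline.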
Analysis of stochastic Lanczos quadrature for spectrum approximation
An error analysis for stochastic Lanczos quadrature (SLQ) is presented, showing that SLQ obtains an approximation to the CESM (cumulative empirical spectral measure) within a Wasserstein distance of t (λmax[A] − λmin[A]) with probability at least 1 − η.
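A minimal sketch of SLQ (omitting the reorthogonalization a robust implementation needs): each random probe drives k steps of Lanczos, and the eigendecomposition of the resulting tridiagonal matrix yields Gaussian quadrature nodes (Ritz values) and weights whose average approximates the spectral density. The parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def slq_spectrum(A, k=30, n_probe=10):
    """SLQ sketch: returns (nodes, weights) whose weighted empirical
    distribution approximates the spectral density of symmetric A."""
    n = A.shape[0]
    nodes, weights = [], []
    for _ in range(n_probe):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        alphas, betas = [], []
        v_prev, beta = np.zeros(n), 0.0
        for _ in range(k):  # plain Lanczos, no reorthogonalization
            w = A @ v - beta * v_prev
            alpha = v @ w
            w -= alpha * v
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            betas.append(beta)
            if beta < 1e-12:
                break
            v_prev, v = v, w / beta
        T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
        theta, S = np.linalg.eigh(T)
        nodes.append(theta)                  # Ritz values = quadrature nodes
        weights.append(S[0] ** 2 / n_probe)  # squared first components = weights
    return np.concatenate(nodes), np.concatenate(weights)

# Usage: the weights sum to 1 and every node lies in the true spectral range.
n = 200
G = rng.standard_normal((n, n))
A = (G + G.T) / np.sqrt(2 * n)
nodes, weights = slq_spectrum(A)
```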
Linear and Sublinear Time Spectral Density Estimation
An O(n/poly(ε)) time algorithm for computing the spectral density of any n × n normalized graph adjacency or Laplacian matrix, which is sublinear in the size of the matrix, and assumes sample access to the graph.
Dynamic Trace Estimation
A practical algorithm for solving the implicit trace estimation problem is presented and it is proved that, in a natural setting, its complexity is quadratically better than the standard solution of repeatedly applying Hutchinson’s stochastic trace estimator.
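For context, Hutchinson's stochastic trace estimator referenced here is only a few lines: for a Rademacher vector z, E[zᵀAz] = tr(A), so averaging over independent probes estimates the trace using matrix-vector products alone. The PSD matrix below is an arbitrary example; in implicit settings `matvec` would be the only access to A.

```python
import numpy as np

rng = np.random.default_rng(2)

def hutchinson_trace(matvec, n, n_probe=100):
    """Hutchinson's estimator: average z^T A z over Rademacher probes z."""
    total = 0.0
    for _ in range(n_probe):
        z = rng.choice([-1.0, 1.0], size=n)
        total += z @ matvec(z)
    return total / n_probe

# Usage on an explicit PSD matrix (the estimator never needs A's entries).
n = 150
B = rng.standard_normal((n, n))
A = B @ B.T
est = hutchinson_trace(lambda z: A @ z, n, n_probe=400)
```

The dynamic-trace-estimation result above concerns maintaining such an estimate as A changes, reusing work across updates rather than re-running this loop from scratch.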
Is Gauss Quadrature Better than Clenshaw–Curtis? (SIAM Review, Vol. 50, No. 1)
The convergence behavior of Gauss quadrature is compared with that of its younger brother, Clenshaw–Curtis, and experiments show that the supposed factor-of-2 advantage of Gauss quadrature is rarely realized.
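A small sketch of such a comparison. The Clenshaw–Curtis weights here are obtained by solving the exactness conditions in the Chebyshev basis (a simple small-N alternative to the usual FFT construction), and the limited-smoothness integrand |x|³ is an assumed test case, not taken from the paper.

```python
import numpy as np

def clenshaw_curtis(N):
    """(N+1)-point Clenshaw-Curtis rule on [-1, 1]: weights solve
    sum_k w_k T_j(x_k) = integral of T_j, for j = 0..N."""
    k = np.arange(N + 1)
    x = np.cos(k * np.pi / N)               # Chebyshev extreme points
    T = np.cos(np.outer(k, k) * np.pi / N)  # T_j(x_k)
    moments = np.zeros(N + 1)               # integral of T_j over [-1, 1]
    even = k[k % 2 == 0]
    moments[even] = 2.0 / (1.0 - even**2)   # odd-degree moments vanish
    return x, np.linalg.solve(T, moments)

# Compare both (N+1)-point rules on f(x) = |x|^3, exact integral 1/2,
# where the factor-of-2 gap between the rules largely disappears.
N = 10
xg, wg = np.polynomial.legendre.leggauss(N + 1)
xc, wc = clenshaw_curtis(N)
err_gauss = abs(wg @ np.abs(xg) ** 3 - 0.5)
err_cc = abs(wc @ np.abs(xc) ** 3 - 0.5)
```

Both rules are interpolatory on N+1 nodes, so both integrate polynomials of degree ≤ N exactly; the paper's point is that for non-analytic integrands like this one, their errors are typically of the same order.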
Hutch++: Optimal Stochastic Trace Estimation
A new randomized algorithm, Hutch++, is introduced, which computes a (1 ± ε) approximation to tr(A) for any positive semidefinite (PSD) A using just O(1/ε) matrix-vector products, improving on the ubiquitous Hutchinson estimator.
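The idea behind Hutch++ can be sketched in a few lines: spend some matrix-vector products on a randomized range-finder that deflates the top of the spectrum (whose trace is then computed exactly), and run Hutchinson only on the small residual. The cross terms vanish because the projector is idempotent, so the split tr(A) = tr(QᵀAQ) + tr((I−QQᵀ)A(I−QQᵀ)) is exact. Sizes and the test matrix below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def hutchpp(matvec, n, m=30):
    """Hutch++ sketch: low-rank deflation plus Hutchinson on the residual,
    using about 3m matrix-vector products in total."""
    # Step 1: range-finder for the top of A's spectrum.
    Omega = rng.choice([-1.0, 1.0], size=(n, m))
    Y = np.column_stack([matvec(Omega[:, j]) for j in range(m)])
    Q, _ = np.linalg.qr(Y)
    # Step 2: exact trace of the projected part, tr(Q^T A Q).
    AQ = np.column_stack([matvec(Q[:, j]) for j in range(m)])
    t_low = np.trace(Q.T @ AQ)
    # Step 3: Hutchinson on the deflated remainder (I - QQ^T) A (I - QQ^T).
    t_rem = 0.0
    for _ in range(m):
        z = rng.choice([-1.0, 1.0], size=n)
        z -= Q @ (Q.T @ z)          # project the probe off range(Q)
        t_rem += z @ matvec(z)
    return t_low + t_rem / m

# Usage: PSD matrix with fast eigenvalue decay, where deflation shines.
n = 400
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
lams = 1.0 / np.arange(1, n + 1) ** 2
A = (U * lams) @ U.T
est = hutchpp(lambda z: A @ z, n)
```

Deflation removes most of the variance that plain Hutchinson would suffer on such a matrix, which is what drives the O(1/ε) versus O(1/ε²) improvement.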
Pseudospectral Shattering, the Sign Function, and Diagonalization in Nearly Matrix Multiplication Time
This work shows that adding a small complex Gaussian perturbation to any matrix splits its pseudospectrum into small well-separated components, and gives the first algorithm to achieve nearly matrix multiplication time for diagonalization in any model of computation (real arithmetic, rational arithmetic, or finite arithmetic).
PyHessian: Neural Networks Through the Lens of the Hessian
PyHessian, a new scalable framework that enables fast computation of Hessian (i.e., second-order derivative) information for deep neural networks, yields new finer-scale insights, demonstrating that while conventional wisdom is sometimes validated, in other cases it is simply incorrect.
The gradient complexity of linear regression
It is shown that, for polynomial accuracy, Θ(d) calls to the oracle are necessary and sufficient even for a randomized algorithm; the lower bound is based on a reduction to estimating the least eigenvalue of a random Wishart matrix.