# Preconditioning in Expectation

@article{Cohen2014PreconditioningIE,
  title={Preconditioning in Expectation},
  author={Michael B. Cohen and Rasmus Kyng and Jakub W. Pachocki and Richard Peng and Anup B. Rao},
  journal={ArXiv},
  year={2014},
  volume={abs/1401.6236}
}

We show that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods. When applied to graph Laplacians, this leads to ultra-sparsifiers that in expectation behave as the nearly-optimal ones given by [Kolla-Makarychev-Saberi-Teng STOC '10]. Combining this with the recursive preconditioning framework by [Spielman-Teng STOC '04] and improved embedding algorithms, this leads to algorithms that solve symmetric diagonally dominant…
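As a toy illustration of the idea in the abstract, the sketch below runs preconditioned conjugate gradient on a shifted graph Laplacian, with a preconditioner obtained by randomly sampling edges so that it matches the original matrix in expectation. This is a minimal NumPy sketch, not the paper's construction: the graph, the sampling rule (keep each chord with probability 1/2 at doubled weight), and the dense inverse of the preconditioner are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pcg(A, b, M_solve, tol=1e-8, maxit=500):
    # Preconditioned conjugate gradient: solves A x = b for SPD A,
    # given M_solve(r) ~ M^{-1} r for an SPD preconditioner M.
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SDD system: Laplacian of a weighted cycle plus random chords, shifted by I.
n = 60
cycle = [(i, (i + 1) % n, 1.0) for i in range(n)]
chords = [(int(rng.integers(n)), int(rng.integers(n)), 0.2) for _ in range(2 * n)]

def laplacian(edge_list):
    L = np.zeros((n, n))
    for u, v, w in edge_list:
        if u == v:
            continue  # self-loops contribute nothing to a Laplacian
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

A = laplacian(cycle + chords) + np.eye(n)  # +I makes the system strictly SPD

# "Preconditioner by sampling": keep the cycle, and keep each chord with
# probability 1/2 at doubled weight, so E[M] = A.
sampled = cycle + [(u, v, 2 * w) for (u, v, w) in chords if rng.random() < 0.5]
M_inv = np.linalg.inv(laplacian(sampled) + np.eye(n))  # dense toy inverse

b = rng.standard_normal(n)
x = pcg(A, b, lambda r: M_inv @ r)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

The point of the paper is that such a sampled preconditioner can work inside an iterative method even though any single sample may fail the usual spectral-approximation requirement.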

## 8 Citations

Solving SDD linear systems in nearly $m\log^{1/2}n$ time

- Computer Science, STOC
- 2014

This work shows that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods, and obtains a two-pass algorithm for constructing optimal embeddings in snowflake spaces that runs in $O(m \log\log n)$ time.

Sparse Matrix Factorizations for Fast Linear Solvers with Application to Laplacian Systems

- Computer Science, Mathematics, SIAM J. Matrix Anal. Appl.
- 2017

This paper shows how to perform cheap iterations along non-sparse search directions, provided that these directions can be extracted from a new kind of sparse factorization, and deduces as a special case a nearly-linear time algorithm to approximate the minimal norm solution of a linear system.

Faster Least Squares Optimization

- Computer Science, Mathematics, ArXiv
- 2019

This work investigates randomized methods for solving overdetermined linear least-squares problems, where the Hessian is approximated based on a random projection of the data matrix, and shows that a fixed subspace embedding with momentum yields the fastest rate of convergence, along with the lowest computational complexity.
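The random-projection idea in this summary can be sketched generically: sketch the data matrix, take the R factor of the sketch's QR decomposition, and use it as a right preconditioner so that even plain gradient iterations converge quickly. This is a standard sketch-and-precondition sketch under assumed sizes, step size, and sketch dimension, not the specific momentum method the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-conditioned overdetermined least-squares problem (toy sizes).
m, n = 500, 20
X = rng.standard_normal((m, n)) * np.logspace(0, 3, n)  # column scales 1..1000
y = rng.standard_normal(m)

# Gaussian sketch of the data matrix; R from the sketch's QR factorization
# makes X @ inv(R) approximately orthonormal.
s = 8 * n
S = rng.standard_normal((s, m)) / np.sqrt(s)
_, R = np.linalg.qr(S @ X)
Rinv = np.linalg.inv(R)   # dense toy; real solvers apply R^{-1} by back-substitution
Xp = X @ Rinv

# Plain gradient descent on the preconditioned problem min ||Xp z - y||_2;
# a conservative fixed step suffices because cond(Xp) is now O(1).
z = np.zeros(n)
for _ in range(200):
    z -= 0.5 * (Xp.T @ (Xp @ z - y))
w = Rinv @ z

w_star, *_ = np.linalg.lstsq(X, y, rcond=None)  # direct solve, for comparison
print(np.linalg.norm(w - w_star))
```

Without the preconditioner, gradient descent on the raw problem would need a number of iterations growing with the squared condition number of `X`, which the column scaling above makes large on purpose.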

Efficient Õ(n/∊) Spectral Sketches for the Laplacian and its Pseudoinverse

- Mathematics, Computer Science, SODA
- 2018

This paper provides nearly-linear time algorithms that, when given a Laplacian matrix $\mathcal{L} \in \mathbb{R}^{n \times n}$ and an error tolerance $\epsilon$, produce $\tilde{O}(n/\epsilon)$-size sketches of both $\mathcal{L}$ and its pseudoinverse.

Efficient $\widetilde{O}(n/\epsilon)$ Spectral Sketches for the Laplacian and its Pseudoinverse

- Computer Science, Mathematics
- 2017

This paper provides nearly-linear time algorithms that, when given a Laplacian matrix $\mathcal{L} \in \mathbb{R}^{n \times n}$ and an error tolerance $\epsilon$, produce $\tilde{O}(n/\epsilon)$-size sketches of both the Laplacian and its pseudoinverse.

A Spectral Approach to the Shortest Path Problem

- Mathematics, Linear Algebra and its Applications
- 2021

Understanding the Computational Power of Spiking Neural Network

- Computer Science
- 2017

This work studies the spiking neural network (SNN) without any artificial construction and finds that the SNN is actually implicitly solving a much more sophisticated problem, the $\ell_1$ minimization problem.

## References


Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems

- Computer Science, STOC '04
- 2004

We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy $\epsilon$ in time linear in their number of non-zeros and $\log(\kappa_f(A)/\epsilon)$, where $\kappa_f(A)$ is the condition number of…

A simple, combinatorial algorithm for solving SDD systems in nearly-linear time

- Computer Science, STOC '13
- 2013

A simple combinatorial algorithm that solves symmetric diagonally dominant (SDD) linear systems in nearly-linear time and has the fastest known running time under the standard unit-cost RAM model.

Spectral Sparsification of Graphs

- Computer Science, Mathematics, SIAM J. Comput.
- 2011

It is proved that every graph has a spectral sparsifier of nearly linear size, and an algorithm is presented that produces spectral sparsifiers in time $O(m\log^{c}m)$, where $m$ is the number of edges in the original graph and $c$ is some absolute constant.

Subgraph sparsification and nearly optimal ultrasparsifiers

- Mathematics, Computer Science, STOC '10
- 2010

It is shown that for each positive integer $k$, every $n$-vertex weighted graph has an $(n-1+k)$-edge spectral sparsifier with relative condition number at most $\frac{n}{k}\log n \, \tilde{O}(\log\log n)$, where $\tilde{O}()$ hides lower-order terms.

Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems

- Computer Science, Mathematics, SIAM J. Matrix Anal. Appl.
- 2014

A randomized algorithm is presented that, on input a symmetric, weakly diagonally dominant matrix $A$ with nonpositive off-diagonal entries and an $n$-vector $b$, produces an $\tilde{x}$ such that $\|\tilde{x} - A^{\dagger} b\|_A \leq \epsilon \|A^{\dagger} b\|_A$ in expected nearly-linear time.

Spectral sparsification of graphs: theory and algorithms

- Computer Science, CACM
- 2013

It is explained what it means for one graph to be a spectral approximation of another, and the development of algorithms for spectral sparsification is reviewed, including a faster algorithm for finding approximate maximum flows and minimum cuts in an undirected network.

Algorithm Design Using Spectral Graph Theory

- Computer Science, Mathematics
- 2013

This thesis develops highly efficient and parallelizable algorithms for solving linear systems involving graph Laplacian matrices, and gives two solvers that take diametrically opposite approaches, including the first highly efficient solver for Laplacian linear systems that parallelizes almost completely.

Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems

- Computer Science, 2013 IEEE 54th Annual Symposium on Foundations of Computer Science
- 2013

This paper shows how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box, and improves the convergence guarantees for Kaczmarz methods.
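For context, the randomized Kaczmarz iteration referenced here is easy to state in its basic Strohmer-Vershynin form: sample a row with probability proportional to its squared norm and project the current iterate onto that row's hyperplane. The problem sizes and iteration count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Consistent overdetermined system A x = b (toy sizes).
m, n = 300, 30
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# Sample row i with probability ||a_i||^2 / ||A||_F^2.
row_norms2 = np.einsum('ij,ij->i', A, A)
probs = row_norms2 / row_norms2.sum()

x = np.zeros(n)
for _ in range(6000):
    i = rng.choice(m, p=probs)
    a = A[i]
    # Project x onto the hyperplane {z : a . z = b_i}.
    x += (b[i] - a @ x) / row_norms2[i] * a

print(np.linalg.norm(x - x_true))
```

The expected squared error contracts by a factor of $1 - \sigma_{\min}(A)^2/\|A\|_F^2$ per step, which is why norm-weighted sampling (rather than uniform sampling) is the standard analysis setting.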

Algorithms, Graph Theory, and Linear Equations in Laplacian Matrices

- Computer Science, Mathematics
- 2011

This talk surveys recent progress on the design of provably fast algorithms for solving linear equations in the Laplacian matrices of graphs.

Paved with Good Intentions: Analysis of a Randomized Block Kaczmarz Method

- Mathematics, ArXiv
- 2012