• Corpus ID: 17165019

Preconditioning in Expectation

@article{Cohen2014PreconditioningIE,
  title={Preconditioning in Expectation},
  author={Michael B. Cohen and Rasmus Kyng and Jakub W. Pachocki and Richard Peng and Anup B. Rao},
  journal={ArXiv},
  year={2014},
  volume={abs/1401.6236}
}
We show that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods. When applied to graph Laplacians, this leads to ultra-sparsifiers that in expectation behave as the nearly-optimal ones given by [Kolla-Makarychev-Saberi-Teng STOC'10]. Combining this with the recursive preconditioning framework of [Spielman-Teng STOC'04] and improved embedding algorithms, this leads to algorithms that solve symmetric diagonally dominant… 
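As a rough, hedged illustration of the abstract's idea (not the paper's actual algorithm), the sketch below builds an SDD system from a grid-graph Laplacian and preconditions conjugate gradient with a sub-Laplacian obtained by keeping each edge with probability p and rescaling kept edges by 1/p, so the preconditioner equals the system matrix in expectation. The grid graph, uniform sampling probability, and diagonal shift are all illustrative assumptions; real ultra-sparsifiers use effective-resistance-based sampling on top of a spanning tree.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, splu, LinearOperator

rng = np.random.default_rng(0)

def grid_edges(k):
    """Edges of a k-by-k grid graph on n = k*k vertices."""
    idx = lambda i, j: i * k + j
    edges = []
    for i in range(k):
        for j in range(k):
            if i + 1 < k:
                edges.append((idx(i, j), idx(i + 1, j)))
            if j + 1 < k:
                edges.append((idx(i, j), idx(i, j + 1)))
    return edges, k * k

def laplacian(edges, weights, n):
    """Weighted graph Laplacian: sum_e w_e (e_u - e_v)(e_u - e_v)^T."""
    rows, cols, vals = [], [], []
    for (u, v), w in zip(edges, weights):
        rows += [u, v, u, v]
        cols += [v, u, u, v]
        vals += [-w, -w, w, w]
    return sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

edges, n = grid_edges(20)
weights = np.ones(len(edges))
# A small diagonal shift makes the (singular) Laplacian SDD and positive definite.
A = laplacian(edges, weights, n) + 1e-2 * sp.identity(n)

# "Preconditioning in expectation": keep each edge with probability p and
# rescale by 1/p, so the sampled Laplacian matches A in expectation.
p = 0.5
keep = rng.random(len(edges)) < p
M = laplacian([e for e, kp in zip(edges, keep) if kp],
              weights[keep] / p, n) + 1e-2 * sp.identity(n)

# Apply M^{-1} via a sparse LU factorization of the (sparser) preconditioner.
lu = splu(sp.csc_matrix(M))
precond = LinearOperator((n, n), matvec=lu.solve)

b = rng.standard_normal(n)
x, info = cg(A, b, M=precond, maxiter=500)  # info == 0 on convergence
```

In a full solver the preconditioner would itself be solved recursively rather than factored directly; the LU solve here just keeps the demonstration self-contained.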


Solving SDD linear systems in nearly m log^{1/2} n time
TLDR
This work shows that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods, and obtains a two-pass algorithm for constructing optimal embeddings in snowflake spaces that runs in O(m log log n) time.
Sparse Matrix Factorizations for Fast Linear Solvers with Application to Laplacian Systems
TLDR
This paper shows how to perform cheap iterations along non-sparse search directions, provided that these directions can be extracted from a new kind of sparse factorization, and deduces as a special case a nearly-linear time algorithm to approximate the minimal norm solution of a linear system.
Faster Least Squares Optimization
TLDR
This work investigates randomized methods for solving overdetermined linear least-squares problems, where the Hessian is approximated based on a random projection of the data matrix, and shows that a fixed subspace embedding with momentum yields the fastest rate of convergence, along with the lowest computational complexity.
Efficient Õ(n/∊) Spectral Sketches for the Laplacian and its Pseudoinverse
TLDR
This paper provides nearly-linear time algorithms that, when given a Laplacian matrix L ∈ ℝ^{n×n} and an error tolerance ε, produce Õ(n/ε)-size sketches of both L and its pseudoinverse.
A Spectral Approach to the Shortest Path Problem
Understanding the Computational Power of Spiking Neural Network
TLDR
The spiking neural network (SNN) is studied without any artificial construction, and it is found that the SNN is actually implicitly solving a much more sophisticated problem, the ℓ1-minimization problem.
Essays on Banking Competition

References

SHOWING 1-10 OF 36 REFERENCES
Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems
We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy ε in time linear in their number of non-zeros and log(κ_f(A)/ε), where κ_f(A) is the condition number of…
A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
TLDR
A simple combinatorial algorithm that solves symmetric diagonally dominant (SDD) linear systems in nearly-linear time and has the fastest known running time under the standard unit-cost RAM model.
Spectral Sparsification of Graphs
TLDR
It is proved that every graph has a spectral sparsifier of nearly linear size, and an algorithm is presented that produces spectral sparsifiers in time $O(m\log^{c}m)$, where $m$ is the number of edges in the original graph and $c$ is some absolute constant.
Subgraph sparsification and nearly optimal ultrasparsifiers
TLDR
It is shown that for each positive integer k, every n-vertex weighted graph has an (n−1+k)-edge spectral sparsifier with relative condition number at most (n/k)·log n·Õ(log log n), where Õ(·) hides lower-order terms.
Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems
TLDR
A randomized algorithm is presented that, on input a symmetric, weakly diagonally dominant matrix A with nonpositive off-diagonal entries and an n-vector b, produces an $\tilde{x}$ such that $\|\tilde{x} - A^{\dagger} b\|_A \leq \epsilon \|A^{\dagger} b\|_A$ in expected time…
Spectral sparsification of graphs: theory and algorithms
TLDR
It is explained what it means for one graph to be a spectral approximation of another and the development of algorithms for spectral sparsification are reviewed, including a faster algorithm for finding approximate maximum flows and minimum cuts in an undirected network.
Algorithm Design Using Spectral Graph Theory
TLDR
This thesis develops highly efficient and parallelizable algorithms for solving linear systems involving graph Laplacian matrices, and gives two solvers that take diametrically opposite approaches, the first highly efficient solver for Laplacian linear systems that parallelizes almost completely.
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems
  • Y. Lee, Aaron Sidford
  • Computer Science
    2013 IEEE 54th Annual Symposium on Foundations of Computer Science
  • 2013
TLDR
This paper shows how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box, and improves the convergence guarantees for Kaczmarz methods.
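The TLDR above mentions improved convergence guarantees for Kaczmarz methods. As a concrete point of reference, here is a minimal randomized Kaczmarz sketch in the Strohmer–Vershynin style, not the paper's accelerated coordinate-descent scheme; problem sizes and the iteration count are illustrative assumptions. Rows are sampled with probability proportional to their squared norm, and each step projects the iterate onto the sampled constraint.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 500, 30
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true  # consistent system, so the iterates converge to x_true

row_norms2 = np.einsum('ij,ij->i', A, A)
probs = row_norms2 / row_norms2.sum()  # sample row i w.p. ||a_i||^2 / ||A||_F^2

x = np.zeros(n)
for i in rng.choice(m, size=20000, p=probs):
    a = A[i]
    # Project the current iterate onto the hyperplane {x : a . x = b_i}.
    x += ((b[i] - a @ x) / row_norms2[i]) * a
```

The expected squared error contracts by a factor of roughly 1 − σ_min(A)²/‖A‖_F² per step; acceleration of the kind the paper develops improves this dependence.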
Algorithms, Graph Theory, and Linear Equations in Laplacian Matrices
TLDR
This talk surveys recent progress on the design of provably fast algorithms for solving linear equations in the Laplacian matrices of graphs.