We show an algorithm for solving symmetric diagonally dominant (SDD) linear systems with m non-zero entries to a relative error of ε in O(m log^{1/2} n log^c log n log(1/ε)) time. Our approach follows the recursive preconditioning framework, which aims to reduce graphs to trees…
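As a concrete illustration of the interface such a solver provides (a Jacobi-preconditioned conjugate gradient sketch, not the paper's recursive preconditioner), here is a minimal example of solving a small SDD system to a target relative residual ε; the names and the test matrix are illustrative only.

```python
import numpy as np

def pcg(A, b, M_inv_diag, eps=1e-8, max_it=1000):
    """Preconditioned conjugate gradient, run to relative residual eps."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for _ in range(max_it):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= eps * nb:
            break
        z = M_inv_diag * r
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x

# SDD example: a strictly diagonally dominant tridiagonal matrix, Jacobi preconditioner
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.01 * np.eye(n)
b = np.random.default_rng(1).standard_normal(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # <= eps
```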
We introduce the sparsified Cholesky and sparsified multigrid algorithms for solving systems of linear equations. These algorithms accelerate Gaussian elimination by sparsifying the nonzero matrix entries created by the elimination process. We use these new algorithms to derive the first nearly linear time algorithms for solving systems of equations in…
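To see why sparsifying the entries created by elimination matters, a minimal numerical sketch (a toy graph, not the paper's construction): eliminating the hub of a star produces a complete graph on the remaining vertices, i.e. full fill-in, and the Schur complement is itself a Laplacian.

```python
import numpy as np

# Star graph: hub 0 joined to n-1 leaves.
n = 6
L = np.zeros((n, n))
for v in range(1, n):
    L[0, 0] += 1; L[v, v] += 1; L[0, v] -= 1; L[v, 0] -= 1

# One step of Gaussian elimination, pivoting on the hub: the Schur complement.
S = L[1:, 1:] - np.outer(L[1:, 0], L[0, 1:]) / L[0, 0]

print(S)                              # complete graph on the leaves: full fill-in
print(np.allclose(S.sum(axis=1), 0))  # rows sum to zero: still a Laplacian
```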
We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does…
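A sketch of the key step, assuming a simple unbiased sampler (not necessarily the paper's exact scheme): eliminating a vertex with neighbor weights w_i creates a clique with edge weights w_i w_j / W, where W is the weighted degree, and a few sampled edges can match that clique in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_lap(n, i, j, w):
    L = np.zeros((n, n))
    L[i, i] += w; L[j, j] += w; L[i, j] -= w; L[j, i] -= w
    return L

# Exact clique created by eliminating a vertex with neighbor weights w.
w = np.array([1.0, 2.0, 3.0, 4.0]); d = len(w); W = w.sum()
C = sum(edge_lap(d, i, j, w[i] * w[j] / W)
        for i in range(d) for j in range(i + 1, d))

# Unbiased sampler: draw endpoints i, j independently with probability w_i / W;
# each sample with i != j contributes (W / (2k)) * L_ij, matching C in expectation.
k = 20000
S = np.zeros((d, d))
for _ in range(k):
    i, j = rng.choice(d, size=2, p=w / W)
    if i != j:
        S += edge_lap(d, i, j, W / (2 * k))
print(np.max(np.abs(S - C)))  # small: the sampled edges match the clique on average
```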
We develop fast algorithms for solving regression problems on graphs where one is given the value of a function at some vertices, and must find its smoothest possible extension to all vertices. The extension we compute is the absolutely minimal Lipschitz extension, and is the limit for large p of p-Laplacian regularization. We present an algorithm that…
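As a toy illustration (a plain fixed-point iteration, not the paper's fast algorithm): on an unweighted graph the absolutely minimal Lipschitz extension is infinity-harmonic, so repeatedly replacing each unlabeled value with the average of the max and min over its neighbors converges to it. On a path with labeled endpoints the limit is the linear interpolation.

```python
import numpy as np

n = 11
x = np.zeros(n)
labeled = {0: 0.0, n - 1: 1.0}
for v, val in labeled.items():
    x[v] = val

for _ in range(2000):
    for v in range(n):
        if v in labeled:
            continue
        nb = [x[v - 1], x[v + 1]]  # path neighbors; general graphs use adjacency lists
        x[v] = (max(nb) + min(nb)) / 2

print(x)  # converges to linear interpolation, the AMLE on a path
```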
Given a directed acyclic graph G, and a set of values y on the vertices, the Isotonic Regression of y is a vector x that respects the partial order described by G, and minimizes ‖x − y‖, for a specified norm. This paper gives improved algorithms for computing the Isotonic Regression for all weighted p-norms with rigorous performance guarantees. Our…
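For intuition, here is the classical pool-adjacent-violators algorithm (PAVA), which solves the unweighted ℓ2 case when G is a path; handling general DAGs and weighted p-norms is what requires the paper's machinery.

```python
def isotonic_l2(y):
    """Pool adjacent violators: the non-decreasing x minimizing sum (x_i - y_i)^2."""
    blocks = []  # each block: [mean, size]
    for v in y:
        blocks.append([float(v), 1])
        # merge while adjacent blocks violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, s2 = blocks.pop()
            m1, s1 = blocks.pop()
            blocks.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    x = []
    for m, s in blocks:
        x.extend([m] * s)
    return x

print(isotonic_l2([1, 3, 2, 2, 5]))  # [1.0, 2.33.., 2.33.., 2.33.., 5.0]
```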
We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(n^{5/3} m^{1/3}) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due…
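For contrast, a sketch of Wilson's classical algorithm, which samples from exactly this distribution via loop-erased random walks; it is exact, but its running time depends on the graph's hitting times rather than a worst-case bound like the one above.

```python
import random

def wilson(adj, root=0):
    """Sample a spanning tree of a weighted undirected graph, given as
    adj = {u: [(v, weight), ...]}, with probability proportional to the
    product of edge weights. Returns {child: parent} pointers."""
    in_tree = {root}
    parent = {}
    for start in adj:
        # Random walk from start; overwriting nxt[u] erases loops implicitly.
        u, nxt = start, {}
        while u not in in_tree:
            nbrs, wts = zip(*adj[u])
            nxt[u] = random.choices(nbrs, weights=wts)[0]
            u = nxt[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = start
        while u not in in_tree:
            parent[u] = nxt[u]
            in_tree.add(u)
            u = nxt[u]
    return parent

adj = {0: [(1, 1.0), (2, 2.0)],
       1: [(0, 1.0), (2, 1.0)],
       2: [(0, 2.0), (1, 1.0)]}
print(wilson(adj))
```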
We show that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods. When applied to graph Laplacians, this leads to ultra-sparsifiers that in expectation behave as the nearly-optimal ones given by [Kolla-Makarychev-Saberi-Teng STOC'10]. Combining this with the recursive preconditioning…
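A toy sketch of the phenomenon (the graph and the edge choice are illustrative; the actual constructions sample off-tree edges carefully, e.g. by stretch): a spanning tree alone preconditions a cycle poorly, with relative condition number growing like n, while adding back even a single off-tree edge collapses it.

```python
import numpy as np

def lap(n, edges, w):
    L = np.zeros((n, n))
    for (u, v), wt in zip(edges, w):
        L[u, u] += wt; L[v, v] += wt; L[u, v] -= wt; L[v, u] -= wt
    return L

def rel_cond(LG, LH):
    """Relative condition number of the pencil (LG, LH); both share the
    all-ones null space, whose zero eigenvalue is filtered out."""
    vals = np.linalg.eigvals(np.linalg.pinv(LH) @ LG).real
    vals = vals[vals > 1e-8]
    return vals.max() / vals.min()

# Unit-weight cycle C_n, preconditioned by the spanning path obtained by
# dropping one edge: the dropped edge has stretch n-1, so kappa grows with n.
n = 40
cycle = [(i, (i + 1) % n) for i in range(n)]
LG = lap(n, cycle, np.ones(n))
LH_tree = lap(n, cycle[:-1], np.ones(n - 1))
print(rel_cond(LG, LH_tree))  # ~n

# Adding the off-tree edge back recovers G exactly in this toy case; in
# general, a small sampled set of off-tree edges yields an ultra-sparsifier.
print(rel_cond(LG, LG))       # 1.0
```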
A spectral sparsifier of a graph G is a sparser graph H that approximately preserves the quadratic form of G, i.e. for all vectors x, x^T L_G x ≈ x^T L_H x, where L_G and L_H denote the respective graph Laplacians. Spectral sparsifiers generalize cut sparsifiers, and have found many applications in designing graph algorithms. In recent years, there has been…
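A sketch of one standard construction in this line of work, sampling edges by effective resistance in the style of Spielman–Srivastava (a dense pseudoinverse is used for simplicity; this is not a statement of this particular paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def lap(n, edges, w):
    L = np.zeros((n, n))
    for (u, v), wt in zip(edges, w):
        L[u, u] += wt; L[v, v] += wt; L[u, v] -= wt; L[v, u] -= wt
    return L

# Dense random graph to sparsify.
n = 40
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
w = rng.uniform(0.5, 1.5, size=len(edges))
LG = lap(n, edges, w)

# Sample q edges with probability proportional to w_e * (effective resistance),
# reweighting each pick so H is unbiased: E[L_H] = L_G.
Linv = np.linalg.pinv(LG)
R = np.array([Linv[u, u] + Linv[v, v] - 2 * Linv[u, v] for u, v in edges])
p = w * R / (w * R).sum()
q = 6 * n
picks = rng.choice(len(edges), size=q, p=p)
LH = lap(n, [edges[i] for i in picks], [w[i] / (q * p[i]) for i in picks])

# The quadratic forms agree on a random test vector.
x = rng.standard_normal(n)
print(x @ LG @ x, x @ LH @ x)
```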
We propose an algorithm for dimensionality reduction on the simplex, mapping a set of high-dimensional distributions to a space of lower-dimensional distributions, whilst approximately preserving pairwise Hellinger distance between distributions. By introducing a restriction on the input data to distributions that are in some sense quite smooth, we can map…
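For intuition: Hellinger distance is, up to a factor of √2, the Euclidean distance between element-wise square roots, so a plain Johnson–Lindenstrauss projection of the square-root vectors already preserves pairwise Hellinger distances, as the sketch below shows. Its outputs, however, are arbitrary low-dimensional vectors rather than distributions, unlike the mapping described above, which lands back on the simplex.

```python
import numpy as np

rng = np.random.default_rng(0)

def hellinger(p, q):
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

n, d, k = 50, 1000, 100
P = rng.dirichlet(np.ones(d), size=n)          # n random distributions on the simplex
G = rng.standard_normal((d, k)) / np.sqrt(k)   # JL projection matrix
Y = np.sqrt(P) @ G                             # embed the square-root vectors

i, j = 0, 1
print(hellinger(P[i], P[j]))                     # true Hellinger distance
print(np.linalg.norm(Y[i] - Y[j]) / np.sqrt(2))  # approximately the same
```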