
- Michael B. Cohen, Rasmus Kyng, +4 authors Shen Chen Xu
- STOC
- 2014

We show an algorithm for solving symmetric diagonally dominant (SDD) linear systems with <i>m</i> non-zero entries to a relative error of <i>ε</i> in <i>O</i>(<i>m</i> log<sup>1/2</sup> <i>n</i> log<sup><i>c</i></sup> log <i>n</i> log(1/<i>ε</i>)) time. Our approach follows the recursive preconditioning framework, which aims to reduce graphs to trees…
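
A minimal illustration of the framework the abstract refers to, not the paper's algorithm (which uses recursively constructed, tree-based preconditioners): preconditioned conjugate gradient on a small SDD system, with a plain Jacobi (diagonal) preconditioner standing in for the sophisticated one.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for SPD A.
    M_inv(r) applies the preconditioner's inverse to a residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# A small SDD system: the Laplacian of a path graph plus a diagonal
# shift (the shift makes the matrix strictly positive definite).
n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
A = L + 0.1 * np.eye(n)

b = np.random.default_rng(0).standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)   # Jacobi (diagonal) preconditioner
```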

- Rasmus Kyng, Anup Rao, Sushant Sachdeva, Daniel A. Spielman
- COLT
- 2015

We develop fast algorithms for solving regression problems on graphs where one is given the value of a function at some vertices, and must find its smoothest possible extension to all vertices. The extension we compute is the absolutely minimal Lipschitz extension, and is the limit for large p of p-Laplacian regularization. We present an algorithm that…
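
A toy sketch of the object being computed, not the paper's fast algorithm: on an unweighted graph, the absolutely minimal Lipschitz extension satisfies, at every free vertex, x[v] = (max over neighbors + min over neighbors) / 2, and iterating that local rule converges on small examples.

```python
import numpy as np

# Path graph 0-1-2-3-4 with boundary values fixed at the endpoints;
# vertices 1..3 are free.
neighbors = {1: [0, 2], 2: [1, 3], 3: [2, 4]}
x = np.zeros(5)
x[0], x[4] = 0.0, 1.0           # given boundary values
for _ in range(2000):           # plain fixed-point iteration
    for v, nbrs in neighbors.items():
        vals = x[nbrs]
        x[v] = (vals.max() + vals.min()) / 2

# On a path, the extension is linear interpolation between the endpoints.
```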

We introduce the sparsified Cholesky and sparsified multigrid algorithms for solving systems of linear equations. These algorithms accelerate Gaussian elimination by sparsifying the nonzero matrix entries created by the elimination process. We use these new algorithms to derive the first nearly linear time algorithms for solving systems of equations in…
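
A small demonstration of the fill-in these algorithms sparsify: eliminating one vertex of a graph Laplacian (a single Schur-complement step of Gaussian elimination) puts a weighted clique on that vertex's neighbors.

```python
import numpy as np

n = 6                           # star graph: center 0, leaves 1..5
L = np.zeros((n, n))
for leaf in range(1, n):
    L[0, 0] += 1; L[leaf, leaf] += 1
    L[0, leaf] = L[leaf, 0] = -1

# Eliminate vertex 0: Schur complement onto the remaining vertices.
S = L[1:, 1:] - np.outer(L[1:, 0], L[0, 1:]) / L[0, 0]

# S is the Laplacian of a complete graph on the 5 leaves with edge
# weight 1/5 each -- the star's 5 hub edges became 10 clique edges.
```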

- Rasmus Kyng, Sushant Sachdeva
- 2016 IEEE 57th Annual Symposium on Foundations of…
- 2016

We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by the product of a sparse lower triangular matrix with its transpose. This gives the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does…
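
For concreteness, the exact factorization that the sampled one approximates: a Laplacian, grounded by deleting its last row and column so it becomes positive definite, factors as L·Lᵀ with L lower triangular. The paper's contribution is to sample the fill-in so L stays sparse; this sketch shows only the dense, exact version.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.random((n, n))
W = np.triu(W, 1); W = W + W.T                 # random positive edge weights
Lap = np.diag(W.sum(axis=1)) - W               # graph Laplacian (singular)
G = Lap[:-1, :-1]                              # grounded Laplacian: SPD

L_chol = np.linalg.cholesky(G)                 # exact lower-triangular factor
```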

- David Durfee, Rasmus Kyng, John Peebles, Anup B. Rao, Sushant Sachdeva
- STOC
- 2017

We present an algorithm that, with high probability, generates a random spanning tree from an edge-weighted undirected graph in Õ(<i>n</i><sup>5/3</sup><i>m</i><sup>1/3</sup>) time. The tree is sampled from a distribution where the probability of each tree is proportional to the product of its edge weights. This improves upon the previous best algorithm due…
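
For reference, the classical baseline this line of work improves on can be sketched with Wilson's algorithm, which samples a uniformly random spanning tree via loop-erased random walks (the weighted case walks with probabilities proportional to edge weights; this unweighted sketch is not the paper's faster algorithm).

```python
import random

def wilson_spanning_tree(adj, seed=0):
    """Sample a uniformly random spanning tree of an unweighted,
    connected graph given as an adjacency-list dict."""
    rng = random.Random(seed)
    vertices = list(adj)
    in_tree = {vertices[0]}        # root the tree at an arbitrary vertex
    parent = {}
    for v in vertices[1:]:
        # Walk from v until hitting the tree; remembering only the last
        # exit taken from each vertex performs the loop erasure.
        u = v
        while u not in in_tree:
            parent[u] = rng.choice(adj[u])
            u = parent[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = v
        while u not in in_tree:
            in_tree.add(u)
            u = parent[u]
    return {(u, parent[u]) for u in parent if u in in_tree}

# 4-cycle on vertices 0..3
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tree = wilson_spanning_tree(adj)
```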

- Michael B. Cohen, Rasmus Kyng, Jakub W. Pachocki, Richard Peng, Anup B. Rao
- arXiv
- 2014

We show that preconditioners constructed by random sampling can perform well without meeting the standard requirements of iterative methods. When applied to graph Laplacians, this leads to ultrasparsifiers that in expectation behave as the nearly-optimal ones given by [Kolla-Makarychev-Saberi-Teng STOC'10]. Combining this with the recursive preconditioning…

- Rasmus Kyng, Jakub W. Pachocki, Richard Peng, Sushant Sachdeva
- SODA
- 2017

A spectral sparsifier of a graph <i>G</i> is a sparser graph <i>H</i> that approximately preserves the quadratic form of <i>G</i>, i.e. for all vectors <i>x</i>, <i>x</i><sup>T</sup>L<sub>G</sub><i>x</i> ≈ <i>x</i><sup>T</sup>L<sub>H</sub><i>x</i>, where L<sub>G</sub> and L<sub>H</sub> denote the respective graph Laplacians. Spectral sparsifiers generalize cut sparsifiers, and have found many applications in designing graph algorithms. In recent years, there has been interest…
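
A sketch of spectral sparsification by effective-resistance sampling (the classic Spielman–Srivastava scheme, not this paper's contribution): sample edges with probability proportional to weight times effective resistance, and reweight so the sparsifier's Laplacian matches the original in expectation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
W = np.ones((n, n)) - np.eye(n)           # unweighted complete graph
L_G = np.diag(W.sum(axis=1)) - W
Lplus = np.linalg.pinv(L_G)               # Laplacian pseudoinverse

edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
# Effective resistance of edge (i, j): (e_i - e_j)^T L^+ (e_i - e_j).
R = np.array([Lplus[i, i] + Lplus[j, j] - 2 * Lplus[i, j] for i, j in edges])
p = R / R.sum()                           # all weights are 1 here

q = 200                                   # number of samples
L_H = np.zeros((n, n))
for k in rng.choice(len(edges), size=q, p=p):
    i, j = edges[k]
    w = 1.0 / (q * p[k])                  # reweight: E[L_H] = L_G
    L_H[i, i] += w; L_H[j, j] += w
    L_H[i, j] -= w; L_H[j, i] -= w
```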

We propose an algorithm for dimensionality reduction on the simplex, mapping a set of high-dimensional distributions to a space of lower-dimensional distributions, whilst approximately preserving pairwise Hellinger distance between distributions. By introducing a restriction on the input data to distributions that are in some sense quite smooth, we can map…
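
A sketch of why such a reduction is plausible, not the paper's construction: Hellinger distance is, up to a factor 1/√2, the Euclidean distance between elementwise square roots, so a Johnson–Lindenstrauss random projection of the square-root vectors approximately preserves pairwise Hellinger distances. Unlike the paper's method, the projected points are no longer distributions.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 2000, 500                          # original and reduced dimension

p = rng.random(d); p /= p.sum()           # two random distributions
q = rng.random(d); q /= q.sum()

def hellinger(p, q):
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

G = rng.standard_normal((k, d)) / np.sqrt(k)   # JL projection matrix
u, v = G @ np.sqrt(p), G @ np.sqrt(q)          # project the sqrt vectors
h_true = hellinger(p, q)
h_proj = np.linalg.norm(u - v) / np.sqrt(2)
```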

- Rasmus Kyng, Anup Rao, Sushant Sachdeva
- NIPS
- 2015

Given a directed acyclic graph <i>G</i>, and a set of values <i>y</i> on the vertices, the Isotonic Regression of <i>y</i> is a vector <i>x</i> that respects the partial order described by <i>G</i>, and minimizes ‖<i>x</i> − <i>y</i>‖, for a specified norm. This paper gives improved algorithms for computing the Isotonic Regression for all weighted ℓ<sub><i>p</i></sub>-norms with rigorous performance guarantees. Our…
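
The special case where the partial order is a simple chain has a classical solution, Pool Adjacent Violators, sketched below for the ℓ₂ norm (the general weighted-DAG case in the paper needs considerably more machinery).

```python
def pava(y):
    """L2 isotonic regression on a chain via Pool Adjacent Violators:
    returns the nondecreasing fit minimizing the sum of squared errors."""
    # Each block stores (sum of values, count); blocks whose means
    # violate monotonicity are merged and replaced by their average.
    blocks = []
    for v in y:
        s, c = float(v), 1
        # Merge while the predecessor's mean is >= the new block's mean.
        while blocks and blocks[-1][0] * c >= s * blocks[-1][1]:
            ps, pc = blocks.pop()
            s += ps; c += pc
        blocks.append((s, c))
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out
```

For example, `pava([3, 1, 2])` pools all three values into a single block with mean 2.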