Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates

@inproceedings{Zhu2015SpectralSA,
  title={Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates},
  author={Zeyuan Allen-Zhu and Zhenyu A. Liao and Lorenzo Orecchia},
  booktitle={Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing},
  year={2015}
}
In this paper, we provide a novel construction of the linear-sized spectral sparsifiers of Batson, Spielman and Srivastava [11]. While previous constructions required Ω(n^4) running time [11, 45], our sparsification routine can be implemented in almost-quadratic running time O(n^{2+ε}). The fundamental conceptual novelty of our work is the leveraging of a strong connection between sparsification and a regret minimization problem over density matrices. This connection was known to provide an…
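The regret-minimization view can be made concrete with the standard matrix multiplicative weights (MMW) update over density matrices — the baseline that, per the title, this paper goes beyond. A minimal numpy sketch (function names and the step size eta are illustrative, not from the paper):

```python
import numpy as np

def density_matrix(loss_sum, eta):
    """X = exp(-eta * S) / Tr(exp(-eta * S)), computed via eigendecomposition."""
    vals, vecs = np.linalg.eigh(loss_sum)
    vals = vals - vals.min()          # shift spectrum to stabilize exp()
    w = np.exp(-eta * vals)
    X = (vecs * w) @ vecs.T
    return X / np.trace(X)

def mmw(losses, eta=0.3):
    """Play the MMW density matrix against a sequence of symmetric loss matrices."""
    n = losses[0].shape[0]
    S = np.zeros((n, n))              # running sum of observed losses
    played = []
    for F in losses:
        played.append(density_matrix(S, eta))
        S = S + F
    return played
```

Each iterate is a density matrix (PSD, trace one); the paper's contribution is replacing this entropy-based regularizer with alternatives that yield the sparsification guarantee in almost-quadratic time.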

Tables from this paper

Dynamic Streaming Spectral Sparsification in Nearly Linear Time and Space

TLDR
This work provides both the first efficient $\ell_2$-sparse recovery algorithm for graphs and new primitives for manipulating the effective-resistance embedding of a graph, both of which are hoped to have further applications.

Speeding Up Sparsification using Inner Product Search Data Structures

TLDR
The heart of the work is the design of a variety of inner product search data structures with efficient initialization, query, and update times that are compatible with dimensionality reduction and robust against an adaptive adversary.

An SDP-based algorithm for linear-sized spectral sparsification

TLDR
An algorithm is presented that outputs a (1+ε)-spectral sparsifier of G with O(n/ε^2) edges in O(m/ε^{O(1)}) time, based on a new potential function that is much easier to compute yet has guarantees similar to the potential functions used in previous references.

Constructing Linear-Sized Spectral Sparsification in Almost-Linear Time

  • Yin Tat Lee, He Sun
  • Computer Science, Mathematics
    2015 IEEE 56th Annual Symposium on Foundations of Computer Science
  • 2015
TLDR
This work presents the first almost-linear time algorithm for constructing linear-sized spectral sparsification for graphs, using a novel combination of two techniques from the literature on constructing spectral sparsification: random sampling by effective resistance and adaptive constructions based on barrier functions.

Flows in almost linear time via adaptive preconditioning

TLDR
This work gives an alternate approach for approximating undirected max-flow, and the first almost-linear time approximations of discretizations of total variation minimization objectives.

Online convex optimization: algorithms, learning, and duality

TLDR
Taking a bird’s-eye view of the connections shown throughout the text, a “genealogy” of OCO algorithms is formed, and some possible paths for future research are discussed.

Constructing Linear-Sized Spectral Sparsification in Almost-Linear Time

We present an almost-linear time algorithm for constructing a spectral sparsifier with the number of edges linear in its number of vertices. This improves all previous constructions of linear-sized

...

References

SHOWING 1-10 OF 51 REFERENCES

Sparse Sums of Positive Semidefinite Matrices

TLDR
This article considers a more general task of approximating sums of symmetric, positive semidefinite matrices of arbitrary rank and presents two deterministic, polynomial time algorithms for solving this problem.

A Matrix Hyperbolic Cosine Algorithm and Applications

TLDR
This paper generalizes Spencer's hyperbolic cosine algorithm to the matrix-valued setting, and gives an elementary connection between spectral sparsification of positive semi-definite matrices and element-wise matrix sparsification, which implies an improved deterministic algorithm for spectral graph sparsification of dense graphs.

Twice-Ramanujan Sparsifiers

TLDR
It is proved that every graph has a spectral sparsifier with a number of edges linear in its number of vertices, and an elementary deterministic polynomial time algorithm is given for constructing H, which approximates G spectrally at least as well as a Ramanujan expander with dn/2 edges approximates the complete graph.

Graph sparsification by effective resistances

TLDR
A key ingredient in the algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which the authors can query the approximate effective resistance between any two vertices in a graph in O(log n) time.
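The quantity this reference samples by can be sketched directly from its definition: the effective resistance of an edge is a quadratic form in the pseudoinverse of the graph Laplacian. A dense, illustrative computation (the reference's contribution is answering these queries in O(log n) time after nearly-linear preprocessing; the function names here are hypothetical):

```python
import numpy as np

def laplacian(n, edges):
    """Build the weighted graph Laplacian; edges are (u, v, weight) on vertices 0..n-1."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def effective_resistances(n, edges):
    """R_e = (chi_u - chi_v)^T L^+ (chi_u - chi_v) for each edge e = (u, v)."""
    Lp = np.linalg.pinv(laplacian(n, edges))
    res = []
    for u, v, _ in edges:
        chi = np.zeros(n)
        chi[u], chi[v] = 1.0, -1.0
        res.append(float(chi @ Lp @ chi))
    return res
```

Sampling each edge with probability proportional to w_e * R_e yields a spectral sparsifier; for a connected graph the weighted resistances sum to exactly n - 1, which fixes the normalization.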

Near-Optimal Algorithms for Online Matrix Prediction

TLDR
This paper isolates a property of matrices called $(\beta,\tau)$-decomposability, and derives an efficient online learning algorithm that enjoys a regret bound of $\tilde{O}(\sqrt{\beta\,\tau\,T})$ for all problems in which the comparison class is composed of $(\beta,\tau)$-decomposable matrices.

Vertex Sparsifiers and Abstract Rounding Algorithms

TLDR
It is shown that any rounding algorithm which also works for the $0$-extension relaxation can be used to construct good vertex sparsifiers for which the optimization problem is easy, and that for many natural optimization problems, the integrality gap of the linear program is always at most $O(\log k)$ times the integrality gap restricted to trees.

An Efficient Algorithm for Unweighted Spectral Graph Sparsification

TLDR
The algorithm can efficiently compute unweighted graph sparsifiers for weighted graphs, leading to sparsified graphs that retain the weights of the original graphs.

Breaking the Multicommodity Flow Barrier for O(√log n)-Approximations to Sparsest Cut

  • Jonah Sherman
  • Computer Science
    2009 50th Annual IEEE Symposium on Foundations of Computer Science
  • 2009
TLDR
The core of the algorithm is a stronger, algorithmic version of Arora et al.'s structure theorem, where it is shown that the matching-chaining argument at the heart of their proof can be viewed as an algorithm that finds good augmenting paths in certain geometric multicommodity flow networks.

Beating the adaptive bandit with high probability

We provide a principled way of proving Õ(√T) high-probability guarantees for partial-information (bandit) problems over arbitrary convex decision sets. First, we prove a regret guarantee for the

Near-optimal no-regret algorithms for zero-sum games

...