We present a hybrid algorithm for optimizing a convex, smooth function over the cone of positive semidefinite matrices. Our algorithm converges to the global optimal solution and can be used to solve general large-scale semidefinite programs, and hence can be readily applied to a variety of machine learning problems. We show experimental results on three …
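For readers unfamiliar with this setting, here is a minimal sketch of a Frank-Wolfe (conditional-gradient) step over the trace-bounded positive semidefinite cone, one standard template for problems of this form; the quadratic test objective, the trace bound of 1, and the step-size rule are illustrative assumptions, not the paper's hybrid algorithm.

```python
import numpy as np

def frank_wolfe_psd(grad_f, n, trace_bound=1.0, iters=200):
    """Conditional-gradient sketch for min f(X) over {X PSD, tr(X) <= trace_bound}.

    grad_f(X) returns the gradient of a convex, smooth f at X. Each linearised
    subproblem is solved by a leading eigenvector of -grad_f(X).
    """
    X = np.zeros((n, n))
    for k in range(iters):
        G = grad_f(X)
        w, V = np.linalg.eigh(-G)          # eigenvalues in ascending order
        if w[-1] <= 0:
            S = np.zeros_like(X)           # linearised objective is minimised at 0
        else:
            v = V[:, -1]                   # leading eigenvector of -G
            S = trace_bound * np.outer(v, v)   # rank-one extreme point
        gamma = 2.0 / (k + 2.0)            # standard Frank-Wolfe step size
        X = (1.0 - gamma) * X + gamma * S
    return X

# Illustrative use: f(X) = 0.5 * ||X - A||_F^2 for a fixed symmetric A (assumed).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((20, 20))
    A = (B + B.T) / 2
    X_hat = frank_wolfe_psd(lambda X: X - A, n=20)
    print("trace:", np.trace(X_hat), "min eigval:", np.linalg.eigvalsh(X_hat).min())
```

In practice the leading eigenvector would be computed with an iterative method (e.g. scipy.sparse.linalg.eigsh) rather than a dense eigendecomposition; the eigh call here just keeps the sketch short.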
We consider an abstract class of optimization problems that are parameterized concavely in a single parameter, and show that the solution path along the parameter can always be approximated with accuracy ε > 0 by a set of size O(1/√ε). A lower bound of Ω(1/√ε) shows that the upper bound is tight up to a constant factor. We also devise an algorithm …
We consider convex optimization problems over the unit simplex that depend on a single parameter. We provide a simple and efficient scheme for maintaining an ε-approximate solution (and a corresponding ε-coreset) along the entire parameter path. We prove correctness and optimality of the method. Practically relevant instances of the …
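A minimal sketch of the path-maintenance idea shared by the two abstracts above: reuse the stored solution while an approximation certificate (a user-supplied gap function) stays at most ε, and re-solve only when it is violated. The fixed-parameter solver, the gap oracle, and the uniform stepping are illustrative assumptions; in particular, this sketch does not reproduce the O(1/√ε) path-complexity guarantee.

```python
from typing import Callable, List, Tuple

def approx_solution_path(
    solve: Callable[[float], object],       # solver at a fixed parameter value t
    gap: Callable[[object, float], float],  # approximation gap of solution x at t
    t_min: float,
    t_max: float,
    eps: float,
    step: float,
) -> List[Tuple[float, object]]:
    """Walk the parameter interval [t_min, t_max] on a fine grid and record a new
    solution only when the previously stored one is no longer eps-approximate.

    Returns the breakpoints (t, x) of a piecewise-constant eps-approximate path.
    """
    t = t_min
    x = solve(t)
    path = [(t, x)]
    while t < t_max:
        t = min(t + step, t_max)
        if gap(x, t) > eps:      # stored solution is no longer good enough here
            x = solve(t)
            path.append((t, x))
    return path

# Illustrative use on a toy problem f_t(x) = (x - t)^2 (assumed for demonstration).
if __name__ == "__main__":
    pts = approx_solution_path(
        solve=lambda t: t,
        gap=lambda x, t: (x - t) ** 2,
        t_min=0.0, t_max=1.0, eps=0.01, step=0.01,
    )
    print(len(pts), "breakpoints kept along the path")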
We devise a simple algorithm for computing an approximate solution path for parameterized semidefinite convex optimization problems that is guaranteed to be ε-close to the exact solution path. As a consequence, we can compute the entire regularization path for many regularized matrix completion and factorization approaches, as well as nuclear norm or …
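As one concrete (assumed) instance of such a regularization path, the sketch below traces nuclear-norm-regularized matrix completion solutions along a decreasing grid of regularization values using SVD soft-thresholding with warm starts, in the spirit of soft-impute; the grid, the fixed iteration count, and the toy data are illustrative choices rather than the guaranteed ε-close path of the abstract.

```python
import numpy as np

def soft_threshold_svd(X, lam):
    """Proximal operator of lam * nuclear norm: shrink all singular values by lam."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def soft_impute_path(M, mask, lams, iters=100):
    """Warm-started solutions of min 0.5*||P_Omega(M - X)||_F^2 + lam*||X||_* along lams."""
    X = np.zeros_like(M)
    path = []
    for lam in sorted(lams, reverse=True):       # from strong to weak regularisation
        for _ in range(iters):
            # Fill the unobserved entries with the current estimate, then shrink.
            X = soft_threshold_svd(mask * M + (1 - mask) * X, lam)
        path.append((lam, X.copy()))
    return path

# Toy usage with a random low-rank matrix and roughly half the entries observed (assumed setup).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 30))
    mask = (rng.random(M.shape) < 0.5).astype(float)
    for lam, X in soft_impute_path(M, mask, lams=[5.0, 2.0, 1.0, 0.5]):
        err = np.linalg.norm((1 - mask) * (X - M)) / np.linalg.norm((1 - mask) * M)
        print(f"lambda={lam:4.1f}  rank={np.linalg.matrix_rank(X)}  held-out rel. error={err:.3f}")
```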
In this paper, we present approximation algorithms for a variety of problems occurring in the design of energy-efficient wireless communication networks. We first study the k-station network problem, where for a set S of stations and some constant k, one wants to assign transmission powers to at most k senders such that every station in S can receive a …
Hyperbolic geometry appears to be intrinsic in many large real networks. We construct and implement a new maximum likelihood estimation algorithm that embeds scale-free graphs in hyperbolic space. All previous similar embedding algorithms require a runtime of Ω(n²). Our algorithm achieves quasilinear runtime, which makes it the first …
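A minimal sketch, under assumptions, of the two ingredients a maximum likelihood hyperbolic embedding typically works with: the distance between points given in native polar coordinates (r, θ) of the hyperbolic plane, and the log-likelihood of the observed graph under a Fermi-Dirac connection probability with radius R and temperature T. The parameter choices and the brute-force likelihood are illustrative; the quasilinear algorithm of the abstract is not reproduced here.

```python
import math

def hyperbolic_distance(r1, theta1, r2, theta2):
    """Distance in the native (polar) model of the hyperbolic plane with curvature -1."""
    dtheta = math.pi - abs(math.pi - abs(theta1 - theta2))
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta))
    return math.acosh(max(arg, 1.0))

def connection_prob(d, R, T):
    """Fermi-Dirac probability that two nodes at hyperbolic distance d are linked."""
    p = 1.0 / (1.0 + math.exp((d - R) / (2.0 * T)))
    return min(max(p, 1e-12), 1.0 - 1e-12)   # clamp to keep the logs finite

def log_likelihood(coords, edges, R, T):
    """Log-likelihood of an undirected graph given node coordinates {node: (r, theta)}."""
    nodes = list(coords)
    edge_set = {frozenset(e) for e in edges}
    ll = 0.0
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            u, v = nodes[i], nodes[j]
            d = hyperbolic_distance(*coords[u], *coords[v])
            p = connection_prob(d, R, T)
            ll += math.log(p) if frozenset((u, v)) in edge_set else math.log(1.0 - p)
    return ll
```

Evaluating the likelihood over all pairs as above costs Θ(n²), which is precisely the bottleneck a quasilinear embedding method has to avoid.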
This paper compares a number of recently proposed models for computing context-sensitive word similarity. We clarify the connections between these models, simplify their formulation and evaluate them in a unified setting. We show that the models are essentially equivalent if syntactic information is ignored, and that the substantial performance differences …
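As a simple, commonly used representative of this model family (an assumption here, not necessarily one of the models compared in the paper), the sketch below contextualizes each target word's distributional vector by component-wise multiplication with a context word's vector and compares the results with cosine similarity; the toy random vectors stand in for real co-occurrence counts.

```python
import numpy as np

def contextualize(target_vec, context_vec):
    """Component-wise (multiplicative) composition of target and context vectors."""
    return target_vec * context_vec

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def in_context_similarity(vecs, target1, context1, target2, context2):
    """Similarity of two target words, each disambiguated by its own context word."""
    a = contextualize(vecs[target1], vecs[context1])
    b = contextualize(vecs[target2], vecs[context2])
    return cosine(a, b)

# Toy usage with random non-negative vectors standing in for distributional vectors (assumed).
if __name__ == "__main__":
    rng = np.random.default_rng(2)
    vecs = {w: np.abs(rng.standard_normal(50)) for w in ["bank", "river", "money", "coast"]}
    print(in_context_similarity(vecs, "bank", "river", "coast", "river"))
```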