Playing Non-linear Games with Linear Oracles

@inproceedings{Garber2013PlayingNG,
  title={Playing Non-linear Games with Linear Oracles},
  author={D. Garber and Elad Hazan},
  booktitle={2013 IEEE 54th Annual Symposium on Foundations of Computer Science},
  year={2013},
  pages={420--428}
}
  • D. Garber, Elad Hazan
  • Published 2013
  • Mathematics, Computer Science
  • 2013 IEEE 54th Annual Symposium on Foundations of Computer Science
Linear optimization is often algorithmically simpler than non-linear convex optimization. Linear optimization over matroid polytopes, matching polytopes and path polytopes are examples of problems for which we have efficient combinatorial algorithms, but whose non-linear convex counterparts are harder and admit significantly less efficient algorithms. This motivates the computational model of online decision making and optimization using a linear optimization oracle. In this computational…
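The model can be made concrete with a minimal sketch (ours, not the paper's): over the probability simplex, the linear optimization oracle has a trivial closed form (the minimizer of a linear function over the simplex is a single vertex), whereas non-linear steps such as Euclidean projection require more involved procedures.

```python
import numpy as np

def linear_oracle_simplex(c):
    """Linear optimization oracle over the probability simplex:
    argmin_{x in simplex} <c, x> is the vertex e_i with the smallest c_i."""
    v = np.zeros_like(c, dtype=float)
    v[np.argmin(c)] = 1.0
    return v

print(linear_oracle_simplex(np.array([0.3, -1.2, 0.5])))  # [0. 1. 0.]
```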
Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games
TLDR
This work shows that when the sequence of loss functions is predictable, a simple modification of FTPL which incorporates optimism can achieve better regret guarantees, while retaining the optimal worst-case regret guarantee for unpredictable sequences.
Frank-Wolfe with a Nearest Extreme Point Oracle
TLDR
For many 0–1 polytopes, under quadratic growth and strict complementarity conditions, the first linearly convergent variant with a rate that depends only on the dimension of the optimal face and not on the ambient dimension is obtained.
Two-Player Games for Efficient Non-Convex Constrained Optimization
TLDR
It is proved that this proxy-Lagrangian formulation, instead of having unbounded size, can be taken to be a distribution over no more than m+1 models (where m is the number of constraints), which is a significant improvement in practical terms.
Projection-Free Bandit Optimization with Privacy Guarantees
TLDR
This is the first differentially-private algorithm for projection-free bandit optimization, and in fact its bound matches the best known non-private projection-free algorithm and the best known private algorithm, even for the weaker setting when projections are available.
Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
TLDR
Algorithms that can solve non-convex constrained optimization problems with possibly non-differentiable and non-convex constraints with theoretical guarantees are provided.
Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets
TLDR
This paper proves that the vanilla FW method converges at a rate of $1/t^2$ over strongly convex sets, and shows that various balls induced by $\ell_p$ norms, Schatten norms and group norms are strongly convex on the one hand, while on the other hand linear optimization over these sets is straightforward and admits a closed-form solution.
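As an illustration of why such sets are attractive (the function names and toy objective below are ours, not the paper's), here is vanilla Frank-Wolfe over a Euclidean ball, one of the strongly convex sets covered, where the linear oracle is a closed-form normalization:

```python
import numpy as np

def frank_wolfe_ball(grad, x0, radius=1.0, steps=200):
    """Vanilla Frank-Wolfe over the Euclidean ball of the given radius;
    the linear oracle is the closed form v = -radius * g / ||g||."""
    x = x0.astype(float).copy()
    for t in range(1, steps + 1):
        g = grad(x)
        norm = np.linalg.norm(g)
        if norm < 1e-12:
            break
        v = -radius * g / norm        # closed-form linear minimizer
        gamma = 2.0 / (t + 2)         # standard step-size schedule
        x = (1.0 - gamma) * x + gamma * v
    return x

# minimize ||x - b||^2 over the unit ball for a target b outside the ball
b = np.array([2.0, 0.0])
x_star = frank_wolfe_ball(lambda x: 2.0 * (x - b), np.zeros(2))
print(x_star)  # approaches the boundary point b / ||b||
```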
Tight Bounds for Approximate Carathéodory and Beyond
TLDR
The result provides a constructive proof for the Approximate Carathéodory Problem, which states that any point inside a polytope contained in the ball of radius $D$ can be approximated to within $\epsilon$ in $\ell_p$ norm by a convex combination of only $O\left(D^2 p/\epsilon^2\right)$ vertices of the polytope for $p \geq 2$.
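The constructive idea for the $p = 2$ case can be sketched as follows (our toy instance, not the paper's algorithm verbatim): running Frank-Wolfe on $f(x) = \|x - x^*\|^2$ over the simplex yields, after $t$ steps, a convex combination of at most $t + 1$ vertices approximating $x^*$:

```python
import numpy as np

def approx_caratheodory(target, steps=1000):
    """Approximate a point of the simplex by a sparse convex combination
    of vertices via Frank-Wolfe on f(x) = ||x - target||^2 (the p = 2 case)."""
    x = np.zeros_like(target, dtype=float)
    x[0] = 1.0                       # start at a vertex
    support = {0}
    for t in range(1, steps + 1):
        g = 2.0 * (x - target)       # gradient of f
        i = int(np.argmin(g))        # linear oracle over the simplex
        support.add(i)
        gamma = 2.0 / (t + 2)        # standard open-loop step size
        x = (1.0 - gamma) * x
        x[i] += gamma
    return x, support

target = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0])
x, support = approx_caratheodory(target)
print(sorted(support), float(np.linalg.norm(x - target)))
```

Note the iterate only ever touches vertices in the support of the target here, so the combination stays sparse.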
On lower complexity bounds for large-scale smooth convex optimization
We derive lower bounds on the black-box oracle complexity of large-scale smooth convex minimization problems, with emphasis on minimizing smooth (with Hölder continuous, with a given exponent and…
Shortest paths, Markov chains, matrix scaling and beyond : improved algorithms through the lens of continuous optimization
TLDR
This thesis develops a faster algorithm for the unit capacity minimum cost flow problem, which encompasses the shortest path with negative weights and minimum cost bipartite perfect matching problems, and develops faster algorithms for scaling and balancing nonnegative matrices, two fundamental problems in scientific computing.
Faster Projection-free Convex Optimization over the Spectrahedron
  • D. Garber
  • Computer Science, Mathematics
  • NIPS
  • 2016
TLDR
This work presents the first result that attains provably faster convergence rates for a CG variant for optimization over the spectrahedron, and presents encouraging preliminary empirical results.

References

SHOWING 1-10 OF 22 REFERENCES
Efficient algorithms for online decision problems
TLDR
This work gives a simple approach for doing nearly as well as the best single decision, where the best is chosen with the benefit of hindsight, and these follow-the-leader style algorithms extend naturally to a large class of structured online problems for which the exponential algorithms are inefficient.
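The follow-the-leader style can be sketched as a minimal follow-the-perturbed-leader loop over discrete decisions (the perturbation scale and the synthetic loss matrix below are our assumptions, not the paper's):

```python
import numpy as np

def follow_the_perturbed_leader(loss_rows, eta=0.1, seed=0):
    """At each round, play the decision minimizing cumulative past loss
    minus a fresh random perturbation, then observe that round's losses."""
    rng = np.random.default_rng(seed)
    cumulative = np.zeros(loss_rows.shape[1])
    total = 0.0
    for losses in loss_rows:
        perturbation = rng.exponential(scale=1.0 / eta, size=cumulative.size)
        choice = int(np.argmin(cumulative - perturbation))
        total += losses[choice]
        cumulative += losses
    return total, cumulative

rng = np.random.default_rng(1)
losses = rng.random((500, 4))
losses[:, 2] *= 0.5                  # decision 2 is best in hindsight
alg_loss, cumulative = follow_the_perturbed_leader(losses)
print(alg_loss - cumulative.min())   # regret vs. the best fixed decision
```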
Online Convex Programming and Generalized Infinitesimal Gradient Ascent
TLDR
An algorithm for convex programming is introduced, and it is shown that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.
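A minimal sketch of the scheme in its descent form (our names; the feasible set is assumed to be a Euclidean ball): take a gradient step on the current loss, then project back onto the set:

```python
import numpy as np

def online_gradient_descent(grads, x0, radius=1.0):
    """Online convex programming step: gradient step on the current loss,
    then Euclidean projection back onto a ball of the given radius."""
    x = x0.astype(float).copy()
    for t, grad in enumerate(grads, start=1):
        x = x - (1.0 / np.sqrt(t)) * grad(x)   # O(1/sqrt(t)) step size
        norm = np.linalg.norm(x)
        if norm > radius:                      # projection onto the ball
            x = radius * x / norm
    return x

# a fixed sequence of identical quadratic losses f_t(x) = ||x - b||^2
b = np.array([0.3, -0.4])
x_final = online_gradient_descent([lambda x: 2.0 * (x - b)] * 100, np.zeros(2))
print(x_final)  # converges to the fixed minimizer b
```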
Sparse convex optimization methods for machine learning
TLDR
A convergence proof guaranteeing $\epsilon$-small error after $O(1/\epsilon)$ iterations is given, along with the sparsity of approximate solutions for any $\ell_1$-regularized convex optimization problem (and for optimization over the simplex), expressed as a function of the approximation quality.
Playing Games with Approximation Algorithms
TLDR
The main innovation is combining Zinkevich's algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm, which shows how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm with a polynomial blowup in runtime.
The convex optimization approach to regret minimization
A well studied and general setting for prediction and decision making is regret minimization in games. Recently the design of algorithms in this setting has been influenced by tools from convex…
A simple polynomial-time rescaling algorithm for solving linear programs
TLDR
It is shown that a randomized version of the perceptron algorithm along with periodic rescaling runs in polynomial-time, and the resulting algorithm for linear programming has an elementary description and analysis.
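The underlying perceptron step (without the periodic rescaling the paper adds) can be sketched for the feasibility problem $Ax > 0$; the instance below is our own toy example:

```python
import numpy as np

def perceptron_feasibility(A, max_iter=10000):
    """Classic perceptron for the feasibility problem Ax > 0: while some
    (normalized) constraint row is violated, add it to the iterate."""
    rows = A / np.linalg.norm(A, axis=1, keepdims=True)
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        margins = rows @ x
        worst = int(np.argmin(margins))
        if margins[worst] > 0:       # all constraints strictly satisfied
            return x
        x = x + rows[worst]
    return x                         # may still be infeasible on exit

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
x = perceptron_feasibility(A)
print(x, (A @ x > 0).all())
```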
Online Learning and Online Convex Optimization
TLDR
A modern overview of online learning is provided to give the reader a sense of some of the interesting ideas and in particular to underscore the centrality of convexity in deriving efficient online learning algorithms.
Projection-free Online Learning
TLDR
This work presents efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps using the Frank-Wolfe technique, and obtains a range of regret bounds for online convex optimization, with better bounds for specific cases such as stochastic online smooth convex optimization.
A regularization of the Frank-Wolfe method and unification of certain nonlinear programming methods
  • A. Migdalas
  • Mathematics, Computer Science
  • Math. Program.
  • 1994
TLDR
A regularization penalty term is added to the objective of the direction generating subproblem, which results in a generic feasible direction method which also includes certain known nonlinear programming methods.
Combinatorial optimization. Polyhedra and efficiency.
TLDR
A comprehensive monograph on polyhedral methods and efficient algorithms for combinatorial optimization.