Smoothing Techniques for Computing Nash Equilibria of Sequential Games
- S. Hoda, Andrew Gilpin, Javier F. Pena, T. Sandholm
- Computer Science, Economics · Math. Oper. Res.
- 1 May 2010
This work develops first-order smoothing techniques for saddle-point problems that arise in finding a Nash equilibrium of sequential games. It introduces heuristics that significantly speed up the algorithm, as well as decomposed game representations that reduce the memory requirements, enabling the application of the techniques to drastically larger games.
Optimal Regularized Dual Averaging Methods for Stochastic Optimization
A novel algorithm based on the regularized dual averaging (RDA) method is developed that simultaneously achieves the optimal convergence rates for both convex and strongly convex losses.
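For an l1 regularizer, the dual averaging step admits a closed form. Below is a minimal sketch of the plain RDA iteration (with the standard sqrt(t) scaling of the proximal term), not the paper's accelerated variants; the function names and the quadratic test setup are illustrative assumptions:

```python
import numpy as np

def rda_l1(grad_fn, dim, steps=100, lam=0.01, gamma=1.0):
    """Sketch of plain regularized dual averaging with an l1 regularizer.

    grad_fn(w) returns a (sub)gradient of the loss at w. Each step
    averages all past gradients and minimizes a simple model:
        <gbar, w> + lam*||w||_1 + (beta_t / t) * ||w||^2 / 2,
    whose minimizer is a soft-thresholding of the average gradient.
    """
    w = np.zeros(dim)
    gbar = np.zeros(dim)               # running average of gradients
    for t in range(1, steps + 1):
        g = grad_fn(w)
        gbar += (g - gbar) / t         # incremental average
        beta = gamma * np.sqrt(t)      # sqrt(t) prox scaling
        # closed-form minimizer: scaled soft-threshold of gbar
        w = -(t / beta) * np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)
    return w
```

The soft-threshold keeps coordinates whose averaged gradient stays below `lam` exactly at zero, which is why RDA is popular for producing sparse solutions.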
Computing the Stability Number of a Graph Via Linear and Semidefinite Programming
This work is based on and refines de Klerk and Pasechnik’s approach to approximating the stability number via copositive programming and provides a closed-form expression for the values computed by the linear programming approximations.
A Conic Programming Approach to Generalized Tchebycheff Inequalities
Relying on a general approximation scheme for conic programming, it is shown that optimal bounds on the expected value of piecewise polynomials over all measures with a given set of moments can be numerically computed or approximated via semidefinite programming.
First-Order Algorithm with O(ln(1/ε)) Convergence for ε-Equilibrium in Two-Person Zero-Sum Games
An iterated version of Nesterov's first-order smoothing method for the two-person zero-sum game equilibrium problem is developed, supplemented with an outer loop that lowers the target ε between iterations (this target controls the amount of smoothing in the inner loop); this yields an exponential speed improvement.
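The outer-loop idea — shrink the target ε, and with it the smoothing parameter, between restarts — can be sketched generically. The sketch below uses an entropy-smoothed, fictitious-play-style inner loop purely for illustration; it is not the paper's first-order inner method, and all names and constants are assumptions:

```python
import numpy as np

def smoothed_br(M, z, mu):
    """Entropy-smoothed best response: softmax of the payoff vector M @ z."""
    v = M @ z / mu
    v -= v.max()                       # guard against overflow
    p = np.exp(v)
    return p / p.sum()

def solve_game(A, eps_final=1e-3):
    """Sketch of the epsilon schedule for a zero-sum matrix game x^T A y.

    Each outer pass halves the target eps and ties the smoothing mu to it,
    then runs an averaged smoothed-best-response inner loop.
    """
    m, n = A.shape
    x = np.full(m, 1.0 / m)            # row player (maximizer)
    y = np.full(n, 1.0 / n)            # column player (minimizer)
    eps = 1.0
    while eps > eps_final:
        mu = eps / 4                   # smoothing proportional to target
        for t in range(1, 200):
            x = (1 - 1 / t) * x + (1 / t) * smoothed_br(A, y, mu)
            y = (1 - 1 / t) * y + (1 / t) * smoothed_br(-A.T, x, mu)
        eps /= 2
    return x, y
```

The point of the schedule is that heavy smoothing early on makes the inner problem easy, while later passes with small ε refine the answer; the paper shows that, with the right inner method, this gives O(ln(1/ε)) overall convergence rather than the O(1/ε) of a fixed target.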
Completely positive reformulations for polynomial optimization
This work provides a general characterization of the class of polynomial optimization problems that can be formulated as a conic program over the cone of completely positive tensors, and shows that recent related results for quadratic problems can be further strengthened and generalized to higher-order polynomial optimization problems.
Polytope Conditioning and Linear Convergence of the Frank-Wolfe Algorithm
For a convex quadratic objective, it is shown that the rate of convergence is determined by a condition number of a suitably scaled polytope, giving new insight into the algorithm's linear convergence property.
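For intuition, here is a minimal sketch of the Frank-Wolfe iteration on the probability simplex, the simplest polytope the analysis covers; the function names, the classic 2/(t+2) step size, and the quadratic test objective are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def frank_wolfe_simplex(grad_fn, dim, steps=200):
    """Sketch of Frank-Wolfe over the probability simplex.

    Each step calls a linear minimization oracle — over the simplex this
    just picks the vertex e_i with the smallest gradient coordinate —
    and moves toward it with the classic diminishing step size.
    """
    x = np.full(dim, 1.0 / dim)            # start at the barycenter
    for t in range(steps):
        g = grad_fn(x)
        v = np.zeros(dim)
        v[np.argmin(g)] = 1.0              # linear minimization oracle
        gamma = 2.0 / (t + 2)              # standard step size
        x = (1 - gamma) * x + gamma * v    # stays inside the simplex
    return x
```

Because every iterate is a convex combination of vertices, the method is projection-free — the feature that makes Frank-Wolfe attractive on polytopes where projection is expensive.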
A Smooth Perceptron Algorithm
A modified version of the perceptron algorithm is proposed that retains the algorithm's original simplicity but has a substantially improved convergence rate.
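The core smoothing idea can be illustrated by replacing the classical perceptron's hard most-violated-constraint update with a softmax-weighted one. This sketch shows only that substitution — the actual smooth perceptron iteration adds Nesterov-style momentum and a decreasing smoothing parameter — and `mu`, the starting point, and the feasibility setup are illustrative assumptions:

```python
import numpy as np

def smooth_perceptron_feasibility(A, iters=200, mu=0.1):
    """Seek y with A.T @ y > 0, where A has unit-norm columns.

    The classical perceptron adds the single most-violated column each
    step; the smoothed update instead adds a softmax-weighted average of
    the columns, concentrated on those with the smallest margins.
    """
    y = A.mean(axis=1)                     # start at the column average
    for _ in range(iters):
        margins = A.T @ y
        if np.all(margins > 0):
            break                          # y strictly separates
        w = np.exp(-(margins - margins.min()) / mu)
        w /= w.sum()                       # softmax over small margins
        y = y + A @ w
    return y
```

Smoothing the update this way is what allows the improved convergence analysis: the soft weighting changes continuously with y, unlike the hard argmax of the classical method.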
A deterministic rescaled perceptron algorithm
A version of the perceptron algorithm that includes a periodic rescaling of the ambient space; it is simpler and shorter than previous rescaled variants and requires neither randomization nor deep separation oracles.