Corpus ID: 173990856

Online Convex Optimization with Perturbed Constraints.

@article{Valls2019OnlineCO,
  title={Online Convex Optimization with Perturbed Constraints.},
  author={V{\'i}ctor Valls and George Iosifidis and Douglas J. Leith and Leandros Tassiulas},
  journal={arXiv: Optimization and Control},
  year={2019}
}
This paper addresses Online Convex Optimization (OCO) problems in which the constraints have additive perturbations that (i) vary over time and (ii) are not known at the time a decision must be made. The perturbations need not be i.i.d. and can be used to model a time-varying budget or commodity in resource allocation problems. The goal is to design a policy that obtains sublinear regret while ensuring that the constraints are satisfied on average. To solve this problem, we present a primal…
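The policy itself is only hinted at in this snippet. As a rough illustration of a generic primal-dual online scheme for constrained OCO (not the paper's exact algorithm), the toy loop below alternates a projected primal gradient step on the Lagrangian with a dual ascent step on the observed budget violation; the box feasible set, single scalar budget constraint, and step size are assumptions made for the sketch.

import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Projection onto a simple box feasible set X (illustrative choice).
    return np.clip(x, lo, hi)

def primal_dual_oco(grad_f, g, grad_g, b_seq, x0, eta=0.05, T=1000):
    # grad_f(x, t): gradient of the round-t loss; g, grad_g: constraint g(x) <= b_t
    # and its gradient; b_seq: perturbed budgets, revealed only after acting.
    x, lam = x0.copy(), 0.0          # primal decision and dual multiplier
    decisions = []
    for t in range(T):
        decisions.append(x.copy())   # commit to x_t before seeing b_t
        b_t = b_seq[t]
        # Primal step: descend the Lagrangian f_t(x) + lam * (g(x) - b_t).
        x = project_box(x - eta * (grad_f(x, t) + lam * grad_g(x)))
        # Dual step: ascend on the observed constraint violation.
        lam = max(0.0, lam + eta * (g(decisions[-1]) - b_t))
    return np.array(decisions)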


Adaptive Algorithms for Online Convex Optimization with Long-term Constraints

We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, i.e., constraints that only need to be satisfied when accumulated over the whole horizon rather than at every round.

Online Convex Optimization with Stochastic Constraints

This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are disclosed only after each decision is made.

Trading regret for efficiency: online convex optimization with long term constraints

This paper proposes an efficient algorithm that achieves an O(√T) regret bound and an O(T^{3/4}) bound on the violation of constraints, and proposes a multipoint bandit feedback algorithm with the same bounds in expectation as the first algorithm.
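The multipoint bandit feedback variant works from function evaluations alone; a generic two-point gradient estimator of the kind used in bandit convex optimization is sketched below. The smoothing parameter and scaling follow the standard construction and are not specific to this paper.

import numpy as np

def two_point_gradient(loss, x, delta=1e-2):
    # Estimate the gradient of `loss` at x from two evaluations along a
    # random unit direction u; the d/(2*delta) factor rescales the finite
    # difference into a gradient estimate of a smoothed version of the loss.
    d = x.size
    u = np.random.randn(d)
    u /= np.linalg.norm(u)
    return (d / (2.0 * delta)) * (loss(x + delta * u) - loss(x - delta * u)) * u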

Online Convex Optimization with Time-Varying Constraints

An online algorithm is developed that solves the problem with $O(1/\epsilon^2)$ convergence time in the special case when all constraint functions are nonpositive over a common subset of $\mathbb{R}^n$.

Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods

This work provides estimates of the primal infeasibility and primal suboptimality of the generated approximate primal solutions, giving a basis for analyzing the trade-off between the desired level of error and the choice of stepsize.
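A minimal sketch of a dual subgradient method with primal averaging, assuming access to an oracle that minimizes the Lagrangian for a fixed multiplier; the constant stepsize and single scalar constraint are placeholders rather than the cited paper's exact setup.

import numpy as np

def dual_subgradient(argmin_lagrangian, g, steps=500, alpha=0.01):
    # Approximately solve min f(x) s.t. g(x) <= 0 by maximizing the dual.
    # argmin_lagrangian(lam) returns x(lam) = argmin_{x in X} f(x) + lam * g(x).
    lam = 0.0
    x_avg, count = None, 0
    for _ in range(steps):
        x = argmin_lagrangian(lam)
        lam = max(0.0, lam + alpha * g(x))     # g(x(lam)) is a dual subgradient
        # Averaging the primal iterates yields an approximately feasible,
        # approximately optimal primal solution.
        count += 1
        x_avg = x if x_avg is None else x_avg + (x - x_avg) / count
    return x_avg, lam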

Online Learning with Sample Path Constraints

This work defines the reward-in-hindsight as the highest reward the decision maker could have achieved, while satisfying the constraints, had she known Nature's choices in advance, and provides an explicit strategy that attains the convex hull of the reward-in-hindsight function.

Fast Algorithms for Online Stochastic Convex Programming

The techniques make explicit the connection of the primal-dual paradigm and online learning to online stochastic convex programming, and yield fast algorithms for these problems that achieve near-optimal regret guarantees for both the i.i.d. and the random permutation models of stochastic inputs.

Online Convex Programming and Generalized Infinitesimal Gradient Ascent

An algorithm for online convex programming is introduced and shown to be a generalization of infinitesimal gradient ascent; the results imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.
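For reference, a minimal projected online gradient descent loop in the spirit of GIGA; the 1/√t step size and L2-ball feasible set are standard textbook choices rather than details taken from the paper.

import numpy as np

def online_gradient_descent(grads, x0, radius=1.0):
    # grads: sequence of functions x -> gradient of the round-t loss at x.
    x = x0.copy()
    decisions = []
    for t, grad_t in enumerate(grads, start=1):
        decisions.append(x.copy())
        eta_t = 1.0 / np.sqrt(t)               # standard O(1/sqrt(t)) step size
        y = x - eta_t * grad_t(x)
        norm = np.linalg.norm(y)               # project back onto the L2 ball
        x = y if norm <= radius else y * (radius / norm)
    return decisions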

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization

This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as those of the best proximal function that could have been chosen in hindsight.
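A compact sketch of the diagonal (per-coordinate) AdaGrad-style update, with step sizes scaled by accumulated squared gradients; the learning rate and epsilon are illustrative hyperparameters.

import numpy as np

def adagrad(grads, x0, eta=0.1, eps=1e-8):
    # grads: sequence of functions x -> (sub)gradient at the current iterate.
    x = x0.copy()
    accum = np.zeros_like(x, dtype=float)      # running sum of squared gradients
    for grad_t in grads:
        g = grad_t(x)
        accum += g ** 2
        # Smaller steps on coordinates that have seen large gradients so far.
        x = x - eta * g / (np.sqrt(accum) + eps)
    return x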

Stochastic Gradient Descent with Only One Projection

This work develops novel stochastic optimization algorithms that do not need intermediate projections to obtain a feasible solution in the given domain, achieving an O(1/√T) convergence rate for general convex optimization and an O(ln T/T) rate for strongly convex optimization under mild conditions on the domain and the objective function.
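A hedged sketch of the one-projection idea: run gradient steps with the domain constraint handled as a penalty along the way, and project onto the true domain only once, at the end. The penalty form, function names, and parameters here are assumptions for illustration, not the paper's exact construction.

import numpy as np

def sgd_one_projection(grad_f, g, grad_g, project_domain, x0,
                       T=1000, eta=0.05, gamma=1.0):
    # Domain is {x : g(x) <= 0}; project_domain is the (possibly expensive)
    # exact projection onto it, called only once after the last iteration.
    x = x0.copy()
    running_sum = np.zeros_like(x, dtype=float)
    for t in range(T):
        # Gradient of the loss plus a penalty pushing the iterate toward g(x) <= 0.
        penalty_grad = gamma * max(0.0, g(x)) * grad_g(x)
        x = x - eta * (grad_f(x, t) + penalty_grad)
        running_sum = running_sum + x
    # Single projection of the averaged iterate onto the actual domain.
    return project_domain(running_sum / T)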