• Corpus ID: 233481260

Regret and Cumulative Constraint Violation Analysis for Distributed Online Constrained Convex Optimization

@article{Yi2021RegretAC,
  title={Regret and Cumulative Constraint Violation Analysis for Distributed Online Constrained Convex Optimization},
  author={Xinlei Yi and Xiuxian Li and Tao Yang and Lihua Xie and Tianyou Chai and Karl Henrik Johansson},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.00321}
}
This paper considers the distributed online convex optimization problem with time-varying constraints over a network of agents. This is a sequential decision-making problem with two sequences of arbitrarily varying convex loss and constraint functions. At each round, each agent selects a decision from the decision set, and then only a portion of the loss function and a coordinate block of the constraint function at this round are privately revealed to this agent. The goal of the network is to…
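The paper's two performance metrics can be formalized as follows (a standard statement consistent with the title and abstract; the paper's exact per-agent indexing may differ). With $n$ agents, decision set $\mathcal{X}$, losses $f_t$, and constraint functions $g_t$, the network regret and cumulative constraint violation are

$$\mathrm{Reg}(T)=\sum_{t=1}^{T}\sum_{i=1}^{n}f_{t}(x_{i,t})-\min_{x\in\mathcal{X}}\sum_{t=1}^{T}\sum_{i=1}^{n}f_{t}(x),\qquad \mathrm{CCV}(T)=\sum_{t=1}^{T}\big\|[g_{t}(x_{i,t})]_{+}\big\|,$$

where $[\cdot]_{+}$ is the componentwise positive part, so strictly feasible rounds cannot cancel out violations.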

Citations of this paper

A Survey of Decentralized Online Learning

TLDR
A thorough overview of DOL from the perspective of problem settings, communication, computation, and performance is provided, and some potential future directions are also discussed in detail.

Regret and Cumulative Constraint Violation Analysis for Online Convex Optimization with Long Term Constraints

TLDR
This paper considers online convex optimization with long-term constraints, where constraints can be violated in intermediate rounds but need to be satisfied in the long run, and shows how to achieve the optimal regret with respect to any comparator sequence.

References

SHOWING 1-10 OF 57 REFERENCES

On Distributed Online Convex Optimization with Sublinear Dynamic Regret and Fit

TLDR
This work considers a distributed online convex optimization problem with time-varying (potentially adversarial) constraints and proposes a distributed primal-dual mirror descent-based algorithm, in which the primal and dual updates are carried out locally at all the nodes.
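To make the primal-dual structure concrete, below is a minimal sketch of one local round in the Euclidean case, where mirror descent reduces to projected gradient descent. The step sizes `eta` and `gamma`, the box decision set, and the function names are illustrative assumptions, not the paper's notation; the consensus step between nodes is omitted here.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^d (the decision set)."""
    return np.clip(x, lo, hi)

def primal_dual_round(x, lam, grad_f, g_val, grad_g, eta=0.1, gamma=0.1):
    """One local primal-dual step at a single node.

    x:      current primal decision, shape (d,)
    lam:    dual variable for the scalar constraint g(x) <= 0
    grad_f: subgradient of the local loss at x, shape (d,)
    g_val:  constraint value g(x)
    grad_g: subgradient of the constraint at x, shape (d,)
    """
    # Primal descent on the local Lagrangian f(x) + lam * g(x).
    x_new = project_box(x - eta * (grad_f + lam * grad_g))
    # Dual ascent on the constraint value, projected onto lam >= 0.
    lam_new = max(0.0, lam + gamma * g_val)
    return x_new, lam_new

# Example round: loss f(x) = ||x - 1||^2, constraint g(x) = sum(x) - 1 <= 0.
x, lam = np.zeros(3), 0.0
x, lam = primal_dual_round(x, lam, 2 * (x - 1.0), np.sum(x) - 1.0, np.ones(3))
```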

Distributed Online Optimization With Long-Term Constraints

TLDR
The proposed regret scalings match those obtained by state-of-the-art algorithms and fundamental limits in the corresponding centralized online optimization problem (for both convex and strongly convex loss functions).

Distributed Online Linear Regression

TLDR
Online linear regression is studied in a distributed setting where the data are spread over a network; at each node and in each round, the algorithm performs a local gradient descent step followed by a communication and averaging step in which nodes align their predictors with those of their neighbors (a sketch of this two-phase round follows below).
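Below is a hypothetical end-to-end sketch of the two-phase round described above: a local gradient step on each node's squared loss, then predictor averaging over a ring network. The topology, step size, noise level, and data generation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d, T, eta = 4, 3, 200, 0.05
w_true = rng.normal(size=d)            # common target predictor

# Doubly stochastic averaging weights on a ring: self 0.5, each neighbor 0.25.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

w = np.zeros((n_nodes, d))             # row i holds node i's predictor
for t in range(T):
    # Local step: each node sees one (a, y) pair and descends its loss.
    for i in range(n_nodes):
        a = rng.normal(size=d)
        y = a @ w_true + 0.1 * rng.normal()
        grad = (w[i] @ a - y) * a      # gradient of 0.5 * (a @ w - y)**2
        w[i] -= eta * grad
    # Communication step: average each predictor with its neighbors'.
    w = W @ w

print("max distance to w_true:", np.max(np.linalg.norm(w - w_true, axis=1)))
```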

An Adaptive Primal-Dual Subgradient Algorithm for Online Distributed Constrained Optimization

TLDR
This paper presents a consensus-based adaptive primal-dual subgradient algorithm that removes the need for knowing the total number of iterations and allows a novel tradeoff between the regret and the violation of constraints.

Online Convex Optimization for Cumulative Constraints

We propose algorithms for online convex optimization which lead to cumulative squared constraint violations of the form $\sum_{t=1}^T\big([g(x_t)]_+\big)^2=O(T^{1-\beta})$, where $\beta\in(0,1)$.
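For readers unfamiliar with the clipped-constraint notation, $[g(x)]_+$ denotes the positive part,

$$[g(x)]_{+}=\max\{0,\,g(x)\},$$

so only rounds in which the constraint is actually violated contribute to the sum, and squaring prevents strictly feasible rounds from cancelling violations, which the plain sum $\sum_{t=1}^{T} g(x_t)$ would allow.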

Distributed Bandit Online Convex Optimization With Time-Varying Coupled Inequality Constraints

TLDR
It is shown that sublinear expected regret and constraint violation are achieved by these two algorithms, if the accumulated variation of the comparator sequence also grows sublinearly.
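In the bandit setting referenced above, agents observe only function values rather than gradients. A standard way to proceed (in the style of one-point gradient estimation; whether this matches the paper's exact estimator is not shown here) is

$$\hat{\nabla}f_{t}(x)=\frac{d}{\delta}\,f_{t}(x+\delta u)\,u,\qquad u\sim\mathrm{Unif}(\mathbb{S}^{d-1}),$$

which is an unbiased estimate of the gradient of the smoothed loss $\mathbb{E}_{v}\,[f_{t}(x+\delta v)]$ with $v$ uniform on the unit ball, so gradient-based updates can be run from bandit feedback alone.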

Distributed Online Convex Optimization With Time-Varying Coupled Inequality Constraints

TLDR
This paper proves that the distributed online primal-dual dynamic mirror descent algorithm achieves sublinear dynamic regret and constraint violation if the accumulated dynamic variation of the optimal sequence also grows sublinearly, and that it achieves smaller bounds on the constraint violation than existing results.
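For context, dynamic regret compares against a time-varying sequence of per-round minimizers, and the "accumulated dynamic variation" is the path length of that sequence (standard definitions; the paper's exact notation may differ):

$$\mathrm{Reg}_{d}(T)=\sum_{t=1}^{T}f_{t}(x_{t})-\sum_{t=1}^{T}f_{t}(x_{t}^{*}),\qquad P_{T}=\sum_{t=2}^{T}\big\|x_{t}^{*}-x_{t-1}^{*}\big\|,$$

where $x_{t}^{*}\in\arg\min_{x\in\mathcal{X}}f_{t}(x)$. Bounds in this literature typically scale with $P_{T}$ and are sublinear whenever $P_{T}$ grows sublinearly.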

Online Convex Optimization With Time-Varying Constraints and Bandit Feedback

TLDR
It is shown that the algorithm possesses sublinear regret with respect to the dynamic benchmark sequence and sublinear constraint violations, as long as the drift of the benchmark sequence is sublinear, or in other words, the underlying dynamic optimization problems do not vary too drastically.

Online Convex Optimization with Stochastic Constraints

This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are revealed only after each decision is made.

Safety-Aware Algorithms for Adversarial Contextual Bandit

TLDR
This work develops a meta-algorithm leveraging online mirror descent for the full-information setting and extends it to the contextual bandit setting with risk constraints using expert advice, achieving near-optimal regret in terms of minimizing the total cost.
...