Corpus ID: 235436202

Improved Regret Bounds for Online Submodular Maximization

@article{Sadeghi2021ImprovedRB,
  title={Improved Regret Bounds for Online Submodular Maximization},
  author={Omid Sadeghi and Prasanna Sanjay Raut and Maryam Fazel},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.07836}
}
In this paper, we consider an online optimization problem over $T$ rounds where at each step $t \in [T]$, the algorithm chooses an action $x_t$ from the fixed convex and compact domain set $K$. A utility function $f_t(\cdot)$ is then revealed and the algorithm receives the payoff $f_t(x_t)$. This problem has been previously studied under the assumption that the utilities are adversarially chosen monotone DR-submodular functions and $O(\sqrt{T})$ regret bounds have been derived. We first characterize the class of…
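
In code, a minimal sketch of this interaction protocol looks as follows (the learner interface, with methods act and update, is illustrative and not from the paper):

# Hypothetical learner interface: act() returns a point x_t in K,
# update(f_t) performs the learner's post-round step (e.g., a gradient step).
def run_online_protocol(learner, utilities):
    """Play one round per utility: commit to x_t, then observe f_t."""
    total_payoff = 0.0
    for f_t in utilities:          # f_t is revealed only after x_t is chosen
        x_t = learner.act()        # choose x_t from the domain K
        total_payoff += f_t(x_t)   # receive the payoff f_t(x_t)
        learner.update(f_t)        # adapt using the revealed utility
    return total_payoff

Note that the learner only observes $f_t$ after committing to $x_t$, which is what makes the adversarial analysis non-trivial.
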
2 Citations


Faster First-Order Algorithms for Monotone Strongly DR-Submodular Maximization

TLDR
This paper proposes a new algorithm that matches the provably optimal $(1-\frac{c}{e})$ approximation ratio after only $\lceil L/\mu \rceil$ iterations, and also studies the Projected Gradient Ascent method for this problem, providing a refined analysis with an improved $\frac{1}{1+c}$ approximation ratio and a linear convergence rate.
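
The Projected Gradient Ascent update analyzed in such results follows the standard pattern sketched below; the step size $\eta$ and the projection oracle are illustrative assumptions, and the refined $\frac{1}{1+c}$ ratio comes from the analysis rather than from a change to the update:

def projected_gradient_ascent(grad_f, project_K, x0, eta, iters):
    """Ascend along grad_f, projecting back onto the feasible set K."""
    x = x0
    for _ in range(iters):
        x = project_K(x + eta * grad_f(x))  # gradient step, then projection
    return x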

Fast First-Order Methods for Monotone Strongly DR-Submodular Maximization

TLDR
This paper proposes the SDRFW algorithm, which matches the provably optimal $(1-\frac{c}{e})$ approximation ratio after only $\lceil L/\mu \rceil$ iterations, and provides a novel characterization of $L$ for DR-submodular functions, showing that in many cases computing $L$ can be formulated as a convex optimization problem that can be solved efficiently.
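
SDRFW's exact step schedule is not reproduced in this snippet; the sketch below is a generic Frank-Wolfe ascent skeleton showing the linear-maximization-oracle structure such methods rely on (the $2/(t+2)$ step size is a common default, assumed here only for illustration):

def frank_wolfe_ascent(grad_f, linear_max_oracle, x0, iters):
    """Move toward the point of K best aligned with the gradient;
    iterates remain in K as convex combinations of points of K."""
    x = x0
    for t in range(iters):
        v = linear_max_oracle(grad_f(x))  # argmax over K of <grad_f(x), v>
        gamma = 2.0 / (t + 2)             # illustrative step-size schedule
        x = (1 - gamma) * x + gamma * v   # convex combination stays in K
    return x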

References

SHOWING 1-10 OF 30 REFERENCES

A Single Recipe for Online Submodular Maximization with Adversarial or Stochastic Constraints

TLDR
The results not only improve upon the existing bounds under linear cumulative constraints, but also give the first sub-linear bounds for general convex long-term constraints.

Online Continuous Submodular Maximization

TLDR
This paper proposes a variant of the Frank-Wolfe algorithm that has access to the full gradient of the objective functions and shows that it achieves a regret bound of $O(\sqrt{T})$ (where $T$ is the horizon of the online optimization problem) against a $(1-1/e)$-approximation to the best feasible solution in hindsight.
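
One round of such a Frank-Wolfe variant can be sketched as follows; this is a hedged rendering with an assumed oracle interface, in which each of $K$ inner steps queries an online linear optimizer for a direction and then feeds back the observed gradient as its linear reward:

def meta_frank_wolfe_round(oracles, grad_f, x0):
    """One online round: len(oracles) inner Frank-Wolfe steps, each
    direction chosen by its own online linear-optimization oracle."""
    K = len(oracles)
    x = x0                              # typically the origin for down-closed K
    for k in range(K):
        v = oracles[k].predict()        # direction proposed by the k-th oracle
        oracles[k].feedback(grad_f(x))  # linear reward <., grad_f(x)> to oracle k
        x = x + (1.0 / K) * v           # averaged steps keep x inside K
    return x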

Logarithmic regret algorithms for online convex optimization

TLDR
Several algorithms achieving logarithmic regret are proposed; besides being more general, they are also much more efficient to implement, and they give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field.
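
The Newton-based method referenced here is the Online Newton Step; a minimal sketch of its update, with illustrative parameter names and an assumed generalized-projection oracle, is:

import numpy as np

def online_newton_step(loss_grads, project_A, x0, gamma, eps):
    """Maintain a Hessian proxy A_t from gradient outer products and
    take a projected Newton-like step after each revealed loss."""
    x = x0
    A = eps * np.eye(x0.shape[0])
    for grad_t in loss_grads:           # grad_t(x) = gradient of round-t loss at x
        g = grad_t(x)
        A = A + np.outer(g, g)                         # rank-one update of A_t
        y = x - (1.0 / gamma) * np.linalg.solve(A, g)  # Newton-like step
        x = project_A(y, A)                            # projection in the A-norm
    return x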

Online Submodular Maximization under a Matroid Constraint with Application to Learning Assignments

TLDR
This work presents an efficient algorithm for this general problem of ad allocation with submodular utilities and analyzes it in the no-regret model; it also presents a second algorithm that handles the more general case in which the feasible sets are given by a matroid constraint, while still maintaining a $1-1/e$ asymptotic performance ratio.

Online Convex Optimization in the Random Order Model

TLDR
This work considers a natural random-order version of the OCO model, in which the adversary can choose the set of loss functions but does not get to choose the order in which they are supplied to the learner; instead, they are observed in uniformly random order.

Continuous DR-submodular Maximization: Structure and Algorithms

TLDR
This work investigates the problem of maximizing non-monotone DR-submodular continuous functions under general down-closed convex constraints, studies geometric properties that underlie such objectives, and devises two optimization algorithms with provable guarantees, validated on synthetic and real-world problem instances.

A Modern Introduction to Online Learning

TLDR
This monograph introduces the basic concepts of Online Learning through a modern view of Online Convex Optimization, and presents first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings.

Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains

TLDR
The weak DR property is introduced, giving a unified characterization of submodularity for all set, integer-lattice, and continuous functions; for maximizing monotone DR-submodular continuous functions under general down-closed convex constraints, a Frank-Wolfe variant with an approximation guarantee and a sub-linear convergence rate is proposed.

Online Continuous DR-Submodular Maximization with Long-Term Budget Constraints

TLDR
The notion of regret is modified by comparing the agent against a $(1-\frac{1}{e})$-approximation to the best fixed decision in hindsight which satisfies the budget constraint proportionally over any window of length $W$.
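
Restated with illustrative notation (the budget functions $g_t$ and the window-start index $s$ are assumptions, not from this snippet), the benchmark and modified regret take the form

$$\mathcal{R}_T = \Big(1-\frac{1}{e}\Big)\sum_{t=1}^{T} f_t(x^\star) - \sum_{t=1}^{T} f_t(x_t), \qquad \text{where } \sum_{t=s}^{s+W-1} g_t(x^\star) \le 0 \ \text{ for every window start } s.$$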

The adwords problem: online keyword matching with budgeted bidders under random permutations

TLDR
The problem of a search engine trying to assign a sequence of search keywords to a set of competing bidders, each with a daily spending limit, is considered, and the current literature on this problem is extended by considering the setting where the keywords arrive in a random order.