Corpus ID: 229349126

Projection-Free Bandit Optimization with Privacy Guarantees

Alina Ene, Huy L. Nguyen, Adrian Vladu
We design differentially private algorithms for the bandit convex optimization problem in the projection-free setting. This setting is important whenever the decision set has a complex geometry, and access to it is done efficiently only through a linear optimization oracle, hence Euclidean projections are unavailable (e.g. matroid polytope, submodular base polytope). This is the first differentially private algorithm for projection-free bandit optimization, and in fact our bound matches the…
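The projection-free setting above replaces Euclidean projections with a linear optimization oracle, in the style of Frank-Wolfe methods. A minimal illustrative sketch of one such oracle-based update (the simplex oracle and step rule here are assumptions for illustration, not the paper's private algorithm):

```python
import numpy as np

def linear_oracle(g):
    # Hypothetical linear optimization oracle over the probability simplex:
    # argmin_{x in simplex} <g, x> is always a vertex (a coordinate basis vector).
    v = np.zeros_like(g)
    v[np.argmin(g)] = 1.0
    return v

def frank_wolfe_step(x, grad, t):
    # One projection-free update: move toward the oracle's vertex.
    # Feasibility is preserved by convex combination, so no projection is needed.
    v = linear_oracle(grad)
    gamma = 2.0 / (t + 2)  # standard Frank-Wolfe step size
    return (1 - gamma) * x + gamma * v
```

Because the iterate is a convex combination of feasible points, it stays inside the decision set by construction; this is exactly what makes the method usable when projections onto sets like the matroid polytope are expensive.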
Littlestone Classes are Privately Online Learnable
The results strengthen this connection and show that an online learning algorithm can in fact be directly privatized (in the realizable setting), providing the first non-trivial regret bound for the realizable setting.


Projection-Free Bandit Convex Optimization
This paper presents the first computationally efficient projection-free algorithm for bandit convex optimization (BCO), achieving a sublinear regret of $O(nT^{4/5})$ for any bounded convex functions with uniformly bounded gradients.
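In the bandit setting only the loss value at the played point is observed, so such algorithms typically build a gradient estimate from a single function evaluation. A sketch of the standard one-point (spherical) estimator, the usual ingredient in BCO analyses rather than this paper's exact construction:

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    # Play a single perturbed point x + delta*u and rescale the observed loss;
    # the result is an unbiased estimate of the gradient of a delta-smoothed f.
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)      # uniform random direction on the unit sphere
    d = x.size
    return (d / delta) * f(x + delta * u) * u
```

Averaged over many draws, the estimate concentrates around the true gradient of the smoothed loss; its high single-query variance is what drives the weaker $T^{4/5}$-type rates compared to full information.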
Improved Regret Bounds for Projection-free Bandit Convex Optimization
The challenge of designing online algorithms for the bandit convex optimization problem (BCO) is revisited, and the first such algorithm that attains sublinear expected regret is presented, using only a bounded number of overall calls to the linear optimization oracle, in expectation, where T is the number of prediction rounds.
(Nearly) Optimal Algorithms for Private Online Learning in Full-information and Bandit Settings
The technique leads to the first algorithms for private online learning in the bandit setting, and in many cases the algorithms match the dependence on the input length T of the optimal nonprivate regret bounds up to logarithmic factors.
Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds
This work provides new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded.
Optimal Algorithms for Online Convex Optimization with Multi-Point Bandit Feedback
The multi-point bandit setting, in which the player can query each loss function at multiple points, is introduced, and regret bounds that closely resemble bounds for the full information case are proved.
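With two queries per round, the variance of the gradient estimate drops dramatically, which is why multi-point feedback recovers near-full-information rates. A sketch of the standard symmetric two-point estimator (function and parameter names are illustrative):

```python
import numpy as np

def two_point_gradient_estimate(f, x, delta, rng):
    # Query f at two symmetric perturbations of x along a random unit vector u;
    # the scaled finite difference estimates the gradient of a smoothed f,
    # with variance bounded independently of the magnitude of f(x).
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)      # uniform random direction on the unit sphere
    d = x.size
    return (d / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
```

Unlike the one-point estimator, the difference cancels the common value f(x), so the estimator's second moment scales with the gradient norm rather than the function value.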
Playing Non-linear Games with Linear Oracles
  • D. Garber, Elad Hazan
  • Mathematics, Computer Science
  • 2013 IEEE 54th Annual Symposium on Foundations of Computer Science
  • 2013
This work gives the first efficient decision-making algorithm with optimal regret guarantees, answering an open question of Kalai and Vempala and of Hazan and Kale, and gives an extension of the algorithm to the partial information setting, i.e. the "bandit" model.
Differentially Private Empirical Risk Minimization Revisited: Faster and More General
The analysis of the expected excess empirical risk is generalized from convex loss functions to non-convex ones satisfying the Polyak-Łojasiewicz condition, and a tighter upper bound on the utility is given.
The Price of Bandit Information for Online Optimization
This paper presents an algorithm which achieves $O^*(n^{3/2}\sqrt{T})$ regret and presents lower bounds showing that this gap is at least $\sqrt{n}$, which is conjectured to be the correct order.
Differentially Private Online Learning
This paper provides a general framework to convert a given algorithm into a privacy-preserving online convex programming (OCP) algorithm with good (sub-linear) regret, and shows that this framework can be used to provide differentially private algorithms for offline learning as well.
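Such private-conversion frameworks typically hinge on releasing running sums (e.g. of gradients) under differential privacy. A minimal sketch of the classical binary-tree aggregation mechanism for private prefix sums, a standard building block in this line of work (an illustration, not necessarily this paper's exact construction):

```python
import numpy as np

def private_prefix_sums(stream, epsilon, seed=0):
    # Tree-based aggregation: each stream element lies in at most log2(T)+1
    # dyadic intervals, each interval gets one cached Laplace noise draw, and
    # every prefix sum is assembled from at most that many noisy intervals.
    rng = np.random.default_rng(seed)
    T = len(stream)
    levels = T.bit_length()        # height of the dyadic interval tree
    scale = levels / epsilon       # sensitivity-1 element appears in <= levels nodes
    node_noise = {}                # lazily drawn noise, one draw per dyadic interval

    def noisy_interval(lo, hi):
        # Sum of stream[lo:hi] plus a cached Laplace noise draw for this node.
        if (lo, hi) not in node_noise:
            node_noise[(lo, hi)] = rng.laplace(scale=scale)
        return sum(stream[lo:hi]) + node_noise[(lo, hi)]

    out = []
    for t in range(1, T + 1):
        total, lo = 0.0, 0
        # Decompose [0, t) into dyadic intervals via the binary expansion of t.
        for b in reversed(range(t.bit_length())):
            if (t >> b) & 1:
                total += noisy_interval(lo, lo + (1 << b))
                lo += 1 << b
        out.append(total)
    return out
```

Because each noise draw is reused across all prefixes containing that interval, the error of any single prefix sum grows only polylogarithmically in T, which is what lets the converted online learner keep sub-linear regret.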
Robbing the bandit: less regret in online geometric optimization against an adaptive adversary
It is proved that, for a large class of full-information online optimization problems, the optimal regret against an adaptive adversary is the same as against a non-adaptive adversary.