Corpus ID: 53022996

Adversarially Robust Optimization with Gaussian Processes

@inproceedings{Bogunovic2018AdversariallyRO,
  title={Adversarially Robust Optimization with Gaussian Processes},
  author={Ilija Bogunovic and Jonathan Scarlett and Stefanie Jegelka and Volkan Cevher},
  booktitle={NeurIPS},
  year={2018}
}
In this paper, we consider the problem of Gaussian process (GP) optimization with an added robustness requirement: the returned point may be perturbed by an adversary, and we require the function value to degrade as little as possible as a result. [...] Key Method: We show that standard GP optimization algorithms do not exhibit the desired robustness properties, and give a novel confidence-bound based algorithm StableOpt for this purpose.
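To make the StableOpt idea concrete, here is a minimal sketch of such a confidence-bound loop on a finite grid, assuming an RBF kernel, a toy objective, and a perturbation set of radius EPS (all illustrative choices, not the paper's settings): each round picks the point whose worst-case upper confidence bound over the perturbation set is largest, then evaluates at the adversarially chosen (lowest lower-bound) neighbour.

```python
# A minimal StableOpt-style sketch; kernel, beta, noise, and objective are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)[:, None]            # candidate points
f = lambda x: np.sin(6 * x) * (1 - x)                 # toy unknown objective
EPS, BETA, NOISE = 0.05, 4.0, 1e-2                    # perturbation radius, UCB width, noise

def nbr(i):
    """Indices of grid points within EPS of grid[i] (the perturbation set)."""
    return np.where(np.abs(grid[:, 0] - grid[i, 0]) <= EPS)[0]

X, y = [grid[100]], [float(f(grid[100, 0]))]
for t in range(30):
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=NOISE).fit(np.array(X), np.array(y))
    mu, sd = gp.predict(grid, return_std=True)
    ucb, lcb = mu + np.sqrt(BETA) * sd, mu - np.sqrt(BETA) * sd
    # Maximize the worst-case UCB over the perturbation set...
    i = int(np.argmax([ucb[nbr(j)].min() for j in range(len(grid))]))
    # ...then evaluate at the adversarially chosen (lowest-LCB) neighbour.
    j = nbr(i)[int(np.argmin(lcb[nbr(i)]))]
    X.append(grid[j]); y.append(float(f(grid[j, 0]) + 0.1 * rng.standard_normal()))
print("reported point:", grid[i, 0])
```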
Adversarial Robustness Guarantees for Gaussian Processes
TLDR: The empirical results suggest that the adversarial robustness of GPs increases with more accurate posterior estimation; a branch-and-bound scheme refines the bounds, and the algorithm is guaranteed to converge to values ε-close to the actual values in finitely many iterations.
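The branch-and-bound scheme can be sketched generically: keep a set of input boxes with sound per-box lower/upper bound oracles, repeatedly refine the most promising box, and stop once the global bounds are ε-tight. The oracles below (interval bounds for the toy function f(x) = x²) are hypothetical placeholders standing in for the paper's GP-specific bounds.

```python
# Generic branch-and-bound sketch; the bound oracles are hypothetical stand-ins.
def bb_min(lo_bound, up_bound, box, eps=1e-6, max_iter=10_000):
    """Bound min f over `box` to within eps, given sound per-box bound oracles."""
    boxes = [box]
    g_lo = g_up = None
    for _ in range(max_iter):
        los = [lo_bound(b) for b in boxes]
        ups = [up_bound(b) for b in boxes]
        g_lo, g_up = min(los), min(ups)
        if g_up - g_lo <= eps:                 # eps-tight certificate reached
            return g_lo, g_up
        a, b = boxes.pop(los.index(g_lo))      # refine the most promising box
        m = 0.5 * (a + b)
        boxes += [(a, m), (m, b)]
    return g_lo, g_up

# Hypothetical oracles: sound interval bounds for the min of f(x) = x^2 on [a, b].
lo = lambda ab: 0.0 if ab[0] <= 0.0 <= ab[1] else min(ab[0] ** 2, ab[1] ** 2)
up = lambda ab: min(ab[0] ** 2, ab[1] ** 2)
print(bb_min(lo, up, (-1.0, 2.0)))             # -> bounds straddling the true min 0.0
```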
Corruption-Tolerant Gaussian Process Bandit Optimization
TLDR: It is observed that distinct algorithmic ideas are required depending on whether one must perform well in both the corrupted and non-corrupted settings, and on whether the corruption level is known.
On Lower Bounds for Standard and Robust Gaussian Process Bandit Optimization
In this paper, we consider algorithm-independent lower bounds for the problem of black-box optimization of functions having a bounded norm in some Reproducing Kernel Hilbert Space (RKHS), which can …
Efficient Batch Black-box Optimization with Deterministic Regret Bounds
TLDR: A novel batch optimization algorithm is proposed that jointly maximizes the acquisition function and selects the points of a whole batch in a holistic way; regret bounds are derived for both the noise-free and perturbation settings, irrespective of the choice of kernel.
Adversarial Robustness Guarantees for Classification with Gaussian Processes
TLDR: The empirical analysis suggests that GPC robustness increases with more accurate posterior estimation; the method is applied to experimentally investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset, and a subset of the MNIST dataset.
Mixed Strategies for Robust Optimization of Unknown Objectives
TLDR: A novel sample-efficient algorithm, GP-MRO, is designed that sequentially learns about the unknown objective from noisy point evaluations and seeks to discover a robust and randomized mixed strategy that maximizes the worst-case expected objective value.
Robust Adaptive Decision Making: Bayesian Optimization and Beyond
TLDR: This dissertation considers four research directions with the goal of enhancing robust and adaptive decision making in Bayesian optimization and associated problems, and proposes a novel robust confidence-bound based algorithm that treats BO and LSE in a unified fashion.
Min-Max Optimization without Gradients: Convergence and Applications to Black-Box Evasion and Poisoning Attacks
TLDR: A principled optimization framework is proposed that integrates a zeroth-order (ZO) gradient estimator with an alternating projected stochastic gradient descent-ascent method, where the former requires only a small number of function queries and the latter needs just a one-step descent/ascent update.
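A minimal sketch of that alternating scheme, assuming a toy strongly-convex-concave saddle problem, a box constraint, and illustrative step sizes: the two-point estimator below approximates each gradient from just two function queries, and one projected descent step in x alternates with one projected ascent step in y.

```python
# Zeroth-order projected gradient descent-ascent for min_x max_y f(x, y); the
# problem, step size ETA, and smoothing radius MU are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x, y: x @ y + 0.5 * x @ x - 0.5 * y @ y   # toy saddle problem
proj = lambda v: np.clip(v, -1.0, 1.0)               # projection onto a box
MU, ETA, D = 1e-4, 0.1, 5                            # smoothing, step size, dimension

def zo_grad(g, v):
    """Two-point zeroth-order estimate of the gradient of g at v."""
    u = rng.standard_normal(D)
    return (g(v + MU * u) - g(v - MU * u)) / (2 * MU) * u

x, y = rng.standard_normal(D), rng.standard_normal(D)
for t in range(2000):
    x = proj(x - ETA * zo_grad(lambda v: f(v, y), x))   # one descent step in x
    y = proj(y + ETA * zo_grad(lambda v: f(x, v), y))   # one ascent step in y
print(np.linalg.norm(x), np.linalg.norm(y))             # both should approach 0 (the saddle)
```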
Active learning for distributionally robust level-set estimation
TLDR: A theoretically grounded and computationally efficient active learning method is proposed for identifying a reliable set H, defined as a region in which the DRPTR measure exceeds a desired probability α; this task can be interpreted as a level-set estimation (LSE) problem for DRPTR.
Saddle point optimization with approximate minimization oracle
TLDR: A heuristic approach is proposed to adapt η in a derivative-free manner, zeroth-order and first-order minimization algorithms are implemented, and the experiments reveal a possible shortcoming of recently developed robust adversarial reinforcement learning algorithms.

References

Showing 1-10 of 40 references
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
TLDR: This work analyzes GP-UCB, an intuitive upper-confidence-based algorithm, and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
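For reference, the GP-UCB rule itself fits in a few lines: fit the GP posterior and query the maximizer of μ(x) + √β·σ(x). The RBF kernel, β = 2, noise level, and toy objective below are illustrative assumptions.

```python
# A minimal GP-UCB sketch on a finite grid; hyperparameters are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 200)[:, None]
f = lambda x: np.sin(6 * x) * (1 - x)                 # toy unknown objective
X, y = [grid[0]], [float(f(grid[0, 0]))]
for t in range(25):
    gp = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-2).fit(np.array(X), np.array(y))
    mu, sd = gp.predict(grid, return_std=True)
    i = int(np.argmax(mu + np.sqrt(2.0) * sd))        # UCB rule with beta = 2
    X.append(grid[i]); y.append(float(f(grid[i, 0]) + 0.1 * rng.standard_normal()))
print("best observed point:", X[int(np.argmax(y))][0])
```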
Robust Maximization of Non-Submodular Objectives
TLDR: A new algorithm, Oblivious-Greedy, is presented, and the first constant-factor approximation guarantees are proved for a wider class of non-submodular objectives, i.e. when the number of deletions $\tau$ is linear in $k$.
Robust Submodular Observation Selection
In many applications, one has to actively select among a set of expensive observations before making an informed decision. For example, in environmental monitoring, we want to select locations to …
Parallel Gaussian Process Optimization with Upper Confidence Bound and Pure Exploration
TLDR: The Gaussian Process Upper Confidence Bound and Pure Exploration algorithm (GP-UCB-PE) is introduced, which combines the UCB strategy and pure exploration in the same batch of evaluations along the parallel iterations, and theoretical upper bounds on the regret are proved for batches of size K.
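A minimal sketch of that batch rule, with the confidence width and the paper's "relevant region" restriction omitted for brevity: the first point of each batch maximizes the UCB, and the remaining K−1 points greedily maximize the posterior variance conditioned on the pending points (fantasizing their outcomes as the posterior mean, since the variance does not depend on the observed values).

```python
# A GP-UCB-PE-style batch sketch; the fixed kernel and beta are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ucb_pe_batch(X, y, grid, K, beta=2.0):
    """First batch point by UCB, remaining K-1 by greedy pure exploration."""
    gp = GaussianProcessRegressor(kernel=RBF(0.1), optimizer=None, alpha=1e-2).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    batch = [int(np.argmax(mu + np.sqrt(beta) * sd))]        # UCB point
    for _ in range(K - 1):                                   # pure-exploration points
        Xp = np.vstack([X, grid[batch]])
        yp = np.concatenate([y, gp.predict(grid[batch])])    # fantasy = posterior mean
        gp_pending = GaussianProcessRegressor(kernel=RBF(0.1), optimizer=None,
                                              alpha=1e-2).fit(Xp, yp)
        _, sd = gp_pending.predict(grid, return_std=True)
        batch.append(int(np.argmax(sd)))                     # highest remaining variance
    return grid[batch]

grid = np.linspace(0.0, 1.0, 100)[:, None]
X0, y0 = grid[[10, 60]], np.array([0.2, 0.9])
print(ucb_pe_batch(X0, y0, grid, K=3))
```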
Safe Exploration for Optimization with Gaussian Processes
TLDR: This work develops an efficient algorithm called SAFEOPT and theoretically guarantees its convergence to a natural notion of optimum reachable under safety constraints; it is evaluated on two real applications: movie recommendation and therapeutic spinal cord stimulation.
Distributionally Robust Submodular Maximization
TLDR: This paper achieves better performance on the actual underlying function $f$ by directly optimizing a combination of bias and variance, and shows compelling empirical evidence that DRO improves generalization to the unknown stochastic submodular function.
Robust Optimization for Non-Convex Objectives
TLDR: A reduction from robust improper optimization to Bayesian optimization is developed: given an oracle that returns $\alpha$-approximate solutions for distributions over objectives, it is shown that de-randomizing this solution is NP-hard in general, but can be done for a broad class of statistical learning tasks.
Certifying Some Distributional Robustness with Principled Adversarial Training
TLDR: This work provides a training procedure that augments model-parameter updates with worst-case perturbations of training data, and efficiently certifies robustness for the population loss by considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball.
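A minimal PyTorch sketch of that procedure, with γ, the step counts, and the learning rates as illustrative assumptions: the inner loop ascends loss(model(z), y) − γ‖z − x‖² to find a worst-case perturbation of the input, and the outer step trains the model on the perturbed input.

```python
# Lagrangian-penalty adversarial training sketch; gamma and steps are illustrative.
import torch

def wrm_step(model, loss_fn, opt, x, y, gamma=1.0, inner_steps=15, lr_z=0.1):
    """One training step: inner ascent finds a penalized worst-case z, outer step fits it."""
    z = x.clone().detach().requires_grad_(True)
    for _ in range(inner_steps):                          # inner maximization over z
        obj = loss_fn(model(z), y) - gamma * ((z - x) ** 2).sum()
        (g,) = torch.autograd.grad(obj, z)
        z = (z + lr_z * g).detach().requires_grad_(True)  # gradient ascent on z
    opt.zero_grad()
    loss_fn(model(z.detach()), y).backward()              # outer minimization over model
    opt.step()

# Hypothetical usage on a toy classifier:
model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
wrm_step(model, torch.nn.functional.cross_entropy, opt,
         torch.randn(8, 4), torch.randint(0, 2, (8,)))
```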
Batch Bayesian Optimization via Local Penalization
TLDR: A simple heuristic based on an estimate of the Lipschitz constant is investigated that captures the most important aspect of this interaction at negligible computational overhead, and it compares well in running time with much more elaborate alternatives.
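A sketch of the local-penalization idea, using a simplified penalizer that stands in for the paper's exact form: after each pending point x_j, the acquisition is multiplied by a factor that is small inside a ball of radius roughly (M − μ(x_j))/L around x_j, where L is the Lipschitz estimate and M the current best observed value.

```python
# Local-penalization batch selection sketch; the penalizer is a simplified stand-in.
import numpy as np
from scipy.stats import norm

def penalized_batch(acq, mu, sd, grid, L, M, K):
    """Greedily pick K batch points by locally penalizing the acquisition."""
    a = np.maximum(acq, 0.0).copy()
    batch = []
    for _ in range(K):
        j = int(np.argmax(a))
        batch.append(j)
        a[j] = 0.0                                        # never re-pick the same point
        r = np.linalg.norm(grid - grid[j], axis=1)        # distances to the chosen point
        z = (L * r - M + mu[j]) / (np.sqrt(2.0) * sd[j] + 1e-12)
        a = a * norm.cdf(z)                               # soft exclusion ball around x_j
    return batch

grid = np.linspace(0.0, 1.0, 200)[:, None]
mu, sd = np.sin(6 * grid[:, 0]), 0.2 * np.ones(200)
print(penalized_batch(mu + sd, mu, sd, grid, L=6.0, M=float(mu.max()), K=4))
```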
Certifiable Distributional Robustness with Principled Adversarial Training
TLDR: This work provides a training procedure that augments model-parameter updates with worst-case perturbations of training data by considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball.