Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
- Niranjan Srinivas, Andreas Krause, S. Kakade, M. Seeger
- Computer Science, International Conference on Machine Learning
- 20 December 2009
This work analyzes GP-UCB, an intuitive upper confidence bound algorithm, and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
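The selection rule at the heart of GP-UCB is compact enough to sketch. Below is a minimal illustration on a finite candidate set, assuming an RBF kernel with known lengthscale, Gaussian observation noise, and the beta_t schedule the paper gives for finite decision sets; the function names are illustrative, not from the authors' code.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential covariance between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Exact GP posterior mean and standard deviation at candidate points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf_kernel(Xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(np.diag(rbf_kernel(Xs, Xs)) - (v ** 2).sum(axis=0), 0.0, None)
    return mu, np.sqrt(var)

def gp_ucb(f, candidates, T, delta=0.1, noise=1e-2):
    """Run T rounds of GP-UCB: always query the maximizer of mu + sqrt(beta_t) * sigma."""
    X, y = [], []
    for t in range(1, T + 1):
        # beta_t for a finite decision set of size |D| (Srinivas et al., Theorem 1).
        beta = 2.0 * np.log(len(candidates) * t ** 2 * np.pi ** 2 / (6.0 * delta))
        if X:
            mu, sigma = gp_posterior(np.array(X), np.array(y), candidates, noise)
        else:
            mu, sigma = np.zeros(len(candidates)), np.ones(len(candidates))
        x = candidates[int(np.argmax(mu + np.sqrt(beta) * sigma))]
        X.append(x)
        y.append(f(x) + np.sqrt(noise) * np.random.randn())
    return X, y
```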
Cost-effective outbreak detection in networks
- J. Leskovec, Andreas Krause, Carlos Guestrin, C. Faloutsos, J. Vanbriesen, N. Glance
- Computer Science, Knowledge Discovery and Data Mining
- 12 August 2007
This work exploits submodularity to develop an efficient algorithm that scales to large problems and achieves near-optimal placements while running 700 times faster than a simple greedy algorithm, with speedups and storage savings of several orders of magnitude.
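The source of that speedup is lazy evaluation of marginal gains, which submodularity justifies: gains can only shrink as the selected set grows, so values computed in earlier rounds remain valid upper bounds. A minimal sketch of this lazy greedy (CELF-style) idea, assuming a normalized (f of the empty set is 0), monotone submodular objective supplied as a Python callable:

```python
import heapq

def lazy_greedy(ground_set, f, k):
    """Select k elements maximizing a monotone submodular set function f.

    Lazy (CELF-style) greedy: stale gains stored in the heap are upper
    bounds on the true marginal gains, so an element whose gain is current
    and still tops the heap must be the exact greedy choice.
    """
    S, fS = [], 0.0
    # Max-heap entries: (negated gain bound, tiebreaker, element, round computed).
    heap = [(-f([e]), i, e, 0) for i, e in enumerate(ground_set)]
    heapq.heapify(heap)
    for t in range(1, k + 1):
        while True:
            neg_gain, i, e, stamp = heapq.heappop(heap)
            if stamp == t:  # gain is exact for the current S: take e
                S.append(e)
                fS -= neg_gain
                break
            # Stale bound: recompute e's gain on top of the current S, re-insert.
            heapq.heappush(heap, (-(f(S + [e]) - fS), i, e, t))
    return S
```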
Near-Optimal Sensor Placements in Gaussian Processes: Theory, Efficient Algorithms and Empirical Studies
- Andreas Krause, A. Singh, Carlos Guestrin
- Computer Science, Journal of Machine Learning Research
- 1 June 2008
It is proved that the problem of finding the configuration that maximizes mutual information is NP-complete, and a polynomial-time approximation algorithm is described that is within (1-1/e) of the optimum, exploiting the submodularity of mutual information.
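For Gaussian models the greedy mutual-information gain reduces to a ratio of conditional variances, since H(y|A) - H(y|Abar) is monotone in Var(y|A) / Var(y|Abar). A rough sketch of that greedy rule, assuming a precomputed covariance matrix K over all candidate locations; the helper names are ours:

```python
import numpy as np

def mi_greedy_placement(K, k, jitter=1e-8):
    """Greedily place k sensors to maximize mutual information.

    At each step pick the location y maximizing the ratio of conditional
    variances Var(y | A) / Var(y | V minus (A + {y})), where A is the set
    chosen so far and V is the full candidate set.
    """
    n = K.shape[0]

    def cond_var(y, S):
        # Var(y | S) via the Schur complement of the covariance matrix.
        if not S:
            return K[y, y]
        K_SS = K[np.ix_(S, S)] + jitter * np.eye(len(S))
        k_yS = K[y, S]
        return K[y, y] - k_yS @ np.linalg.solve(K_SS, k_yS)

    A = []
    for _ in range(k):
        rest = [y for y in range(n) if y not in A]
        gain = lambda y: cond_var(y, A) / cond_var(y, [v for v in rest if v != y])
        A.append(max(rest, key=gain))
    return A
```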
Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting
- Niranjan Srinivas, Andreas Krause, S. Kakade, M. Seeger
- Computer Science, IEEE Transactions on Information Theory
- 1 May 2012
This work analyzes an intuitive Gaussian process upper confidence bound algorithm and bounds its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design and obtaining explicit sublinear regret bounds for many commonly used covariance functions.
Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization
- D. Golovin, Andreas Krause
- Computer Science, Journal of Artificial Intelligence Research
- 21 March 2010
It is proved that if a problem satisfies adaptive submodularity, a simple adaptive greedy algorithm is guaranteed to be competitive with the optimal policy, providing performance guarantees for both stochastic maximization and coverage.
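The adaptive greedy policy itself is a short loop around two problem-specific oracles. A schematic sketch, where `expected_gain` (expected marginal benefit conditioned on what has been observed) and `observe` (reveals an item's random state) are hypothetical placeholders that each application must supply:

```python
def adaptive_greedy(items, expected_gain, observe, k):
    """Adaptive greedy policy: repeatedly pick the item with the highest
    conditional expected marginal benefit, then observe its realized state.

    Under adaptive submodularity this policy is competitive with the
    optimal adaptive policy for both maximization and coverage.
    """
    realized = {}  # item -> observed state so far
    for _ in range(k):
        best = max((i for i in items if i not in realized),
                   key=lambda i: expected_gain(i, realized))
        realized[best] = observe(best)
    return realized
```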
Advances in Neural Information Processing Systems (NIPS)
- Hemant Tyagi, B. Gärtner, Andreas Krause
- Computer Science
- 13 December 2014
Submodular Function Maximization
- Andreas Krause, D. Golovin
- Computer Science, Tractability
- 2014
This survey introduces submodularity and some of its generalizations, illustrates how it arises in various applications, and discusses algorithms for optimizing submodular functions.
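The canonical algorithm behind many of these results is the greedy method of Nemhauser, Wolsey, and Fisher, which gives a (1-1/e) approximation for maximizing a monotone submodular function under a cardinality constraint. A minimal sketch, with maximum coverage as the usage example (assuming a normalized set function passed in as a callable):

```python
def greedy(ground_set, f, k):
    """Plain greedy: repeatedly add the element with the largest marginal
    gain; a (1 - 1/e)-approximation for monotone submodular f."""
    S = []
    for _ in range(k):
        S.append(max((e for e in ground_set if e not in S),
                     key=lambda e: f(S + [e])))
    return S

# Usage: maximum coverage, a classic monotone submodular objective.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set().union(set(), *(sets[i] for i in S)))
print(greedy(list(sets), f, 2))  # -> [1, 2], covering {"a", "b", "c"}
```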
Streaming submodular maximization: massive data summarization on the fly
- Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, Andreas Krause
- Computer Science, Knowledge Discovery and Data Mining
- 24 August 2014
This paper develops the first efficient streaming algorithm with a constant-factor (1/2 - ε) approximation guarantee to the optimum solution, requiring only a single pass through the data and memory independent of the data size.
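The heart of the method is a thresholding rule; the full algorithm (SIEVE-STREAMING) runs many such sieves in parallel for geometrically spaced guesses of the optimal value, which is where the ε enters. A sketch of a single sieve, assuming a normalized monotone submodular f and a guess opt_guess of the optimum:

```python
def sieve_single_threshold(stream, f, k, opt_guess):
    """One sieve of SIEVE-STREAMING: a single pass keeping at most k elements.

    An arriving element is kept only if its marginal gain covers a fair
    share of the remaining distance to opt_guess / 2; with opt_guess close
    to the true optimum this yields a (1/2 - eps)-approximate solution.
    """
    S, fS = [], 0.0
    for e in stream:
        if len(S) < k:
            gain = f(S + [e]) - fS
            if gain >= (opt_guess / 2.0 - fS) / (k - len(S)):
                S.append(e)
                fS += gain
    return S
```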
Parallelizing Exploration-Exploitation Tradeoffs with Gaussian Process Bandit Optimization
- Thomas Desautels, Andreas Krause, J. Burdick
- Computer Science, International Conference on Machine Learning
- 26 June 2012
This work develops GP-BUCB, a principled algorithm for choosing batches based on the GP-UCB algorithm for sequential GP optimization, and proves a surprising result: compared to the sequential approach, the cumulative regret of the parallel algorithm increases only by a constant factor independent of the batch size B.
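The batch-selection trick is to hallucinate each selected point's observation as its current posterior mean: the posterior mean is then unchanged, but the posterior variance still shrinks, pushing later batch members toward unexplored regions. A rough sketch over a finite candidate set, assuming a given posterior mean vector, covariance matrix, and confidence parameter (names are illustrative):

```python
import numpy as np

def gp_bucb_batch(mu, K, B, beta, noise=1e-2):
    """Choose a batch of B candidate indices in the spirit of GP-BUCB.

    mu: posterior mean over candidates given the real data so far.
    K:  posterior covariance over the same candidates.
    Hallucinating y = mu[i] leaves the mean untouched, so only the
    rank-one variance update is applied after each pick.
    """
    K = K.copy()
    batch = []
    for _ in range(B):
        sigma = np.sqrt(np.clip(np.diag(K), 0.0, None))
        i = int(np.argmax(mu + np.sqrt(beta) * sigma))
        batch.append(i)
        k_i = K[:, i].copy()
        K = K - np.outer(k_i, k_i) / (K[i, i] + noise)  # condition on hallucinated obs
    return batch
```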
Robust Submodular Observation Selection
- Andreas Krause, H. B. McMahan, Carlos Guestrin, Anupam Gupta
- Computer Science
- 2008
This paper presents the Submodular Saturation algorithm, a simple and efficient algorithm with strong theoretical approximation guarantees for cases where the possible objective functions exhibit submodularity, an intuitive diminishing-returns property, and proves that better approximation algorithms do not exist unless NP-complete problems admit efficient algorithms.
...