Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design
TLDR: We analyze GP-UCB, an intuitive upper-confidence-based algorithm, and bound its cumulative regret in terms of the maximal information gain, establishing a novel connection between GP optimization and experimental design.
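The GP-UCB rule summarized above can be sketched in a few lines: maintain a GP posterior over the objective and always query the point maximizing posterior mean plus a scaled posterior standard deviation. The RBF kernel, fixed β, and candidate grid below are illustrative assumptions, not the paper's setup (which grows β_t over the rounds):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel on 1-D inputs (illustrative choice)."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_ucb_step(X_obs, y_obs, grid, beta=4.0, noise=1e-4):
    """One GP-UCB iteration: return the grid point maximizing mu + sqrt(beta)*sigma."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(grid, X_obs)
    mu = Ks @ np.linalg.solve(K, y_obs)
    # posterior variance: k(x,x) - k(x,X) K^{-1} k(X,x)
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    ucb = mu + np.sqrt(beta * np.clip(var, 0.0, None))
    return grid[np.argmax(ucb)]

# Toy run: maximize a hidden function on [0, 1] from two endpoint observations.
f = lambda x: -(x - 0.7) ** 2
grid = np.linspace(0.0, 1.0, 101)
X, y = np.array([0.0, 1.0]), np.array([f(0.0), f(1.0)])
for _ in range(15):
    x_next = gp_ucb_step(X, y, grid)
    X, y = np.append(X, x_next), np.append(y, f(x_next))
```

The exploration bonus shrinks wherever the function has been sampled, so queries concentrate around the maximum without ever committing to it prematurely.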
Cost-effective outbreak detection in networks
TLDR: We exploit submodularity to develop an efficient algorithm that scales to large problems, achieving near-optimal placements while being 700 times faster than a simple greedy algorithm.
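The speedup described above comes from lazy evaluations: submodularity guarantees that marginal gains only shrink as the solution grows, so stale gains cached in a priority queue remain valid upper bounds. A minimal sketch of the idea (the toy coverage objective is a hypothetical stand-in, not the paper's outbreak-detection objective):

```python
import heapq

def lazy_greedy(ground, f, k):
    """Greedy maximization of a monotone submodular f with lazy evaluations.
    Cached gains are upper bounds (diminishing returns), so an element is
    re-evaluated only when it reaches the top of the priority queue."""
    S, fS = [], 0.0
    heap = [(-f([e]), e) for e in ground]      # initial gains f({e})
    heapq.heapify(heap)
    while len(S) < k and heap:
        _, e = heapq.heappop(heap)
        gain = f(S + [e]) - fS                 # refresh possibly stale gain
        if not heap or gain >= -heap[0][0]:    # still the best: commit
            S.append(e)
            fS += gain
        else:
            heapq.heappush(heap, (-gain, e))   # re-queue with fresh gain
    return S

# Toy coverage objective: f(S) = number of items covered by the chosen sets.
sets = {"a": {1, 2, 3}, "b": {3, 4, 5}, "c": {5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
picked = lazy_greedy(list(sets), f, 2)
```

In the worst case this matches plain greedy, but in practice most elements never need re-evaluation, which is where the large constant-factor speedups come from.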
Near-Optimal Sensor Placements in Gaussian Processes: Theory, Efficient Algorithms and Empirical Studies
TLDR: We solve the combinatorial optimization problem of maximizing the mutual information between the chosen locations and the unselected locations by exploiting the submodularity of mutual information.
Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization
TLDR: We introduce the concept of adaptive submodularity, generalizing submodular set functions to adaptive policies.
Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting
TLDR: We analyze an intuitive Gaussian process upper confidence bound algorithm, and bound its cumulative regret in terms of the maximal information gain, establishing a novel connection between GP optimization and experimental design.
Submodular Function Maximization
TLDR: We introduce submodularity and some of its generalizations, illustrate how it arises in various applications, and discuss algorithms for optimizing submodular functions.
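Submodularity is the diminishing-returns property this survey builds on: for sets A ⊆ B and any element e outside B, adding e to the smaller set helps at least as much as adding it to the larger one. A hypothetical coverage function makes the property concrete:

```python
def coverage(sets, S):
    """f(S) = number of ground elements covered by the chosen sets."""
    return len(set().union(*(sets[i] for i in S))) if S else 0

sets = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}}
A, B = ["a"], ["a", "b"]                                 # A is a subset of B
gain_A = coverage(sets, A + ["c"]) - coverage(sets, A)   # c adds {3, 4} to A
gain_B = coverage(sets, B + ["c"]) - coverage(sets, B)   # c adds only {4} to B
assert gain_A >= gain_B                                  # diminishing returns
```

This single inequality is what powers the classic (1 − 1/e) guarantee for greedy maximization under a cardinality constraint.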
Advances in Neural Information Processing Systems (NIPS)
Streaming submodular maximization: massive data summarization on the fly
TLDR: We develop the first efficient streaming algorithm with a constant-factor (1/2 − ε) approximation guarantee to the optimum solution, requiring only a single pass through the data and memory independent of the data size.
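The single-pass guarantee above rests on a thresholding rule: keep an arriving element only if its marginal gain clears a bar derived from a guess of the optimal value. A single-threshold sketch (the full algorithm runs a geometric grid of guesses in parallel; `opt_guess` and the toy coverage objective are illustrative assumptions):

```python
def sieve_stream(stream, f, k, opt_guess):
    """Keep element e iff its marginal gain >= (opt_guess/2 - f(S)) / (k - |S|)."""
    S, fS = [], 0.0
    for e in stream:
        if len(S) == k:
            break
        gain = f(S + [e]) - fS
        if gain >= (opt_guess / 2 - fS) / (k - len(S)):
            S.append(e)
            fS += gain
    return S

# Toy coverage objective; the optimum for k = 2 is f({a, b}) = 5.
sets = {"a": {1, 2, 3}, "b": {3, 4, 5}, "c": {5}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
kept = sieve_stream(["c", "a", "b"], f, k=2, opt_guess=5)
```

Each element is examined exactly once and only the k kept elements are stored, which is how memory stays independent of the stream length.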
Near-optimal sensor placements in Gaussian processes
TLDR: We propose a new optimization criterion, mutual information, that seeks to find sensor placements that are most informative about unsensed locations.
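For a GP, the mutual-information gain of adding a sensor at s to a placement A is the entropy difference H(y_s | A) − H(y_s | Ā∖{s}), which reduces to comparing two conditional variances. A small sketch of the resulting greedy rule (the RBF kernel and the line of candidate sites are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def cond_var(K, s, A):
    """GP posterior variance at index s given observations at indices A."""
    if not A:
        return K[s, s]
    KA = K[np.ix_(A, A)]
    ks = K[s, A]
    return K[s, s] - ks @ np.linalg.solve(KA, ks)

def greedy_mi_placement(K, k):
    """Greedily add the site maximizing var(s | A) / var(s | rest), the
    per-step mutual-information gain (up to a log) for a Gaussian process."""
    n = K.shape[0]
    A = []
    for _ in range(k):
        rest = [i for i in range(n) if i not in A]
        def mi_gain(s):
            others = [i for i in rest if i != s]
            return cond_var(K, s, A) / max(cond_var(K, s, others), 1e-12)
        A.append(max(rest, key=mi_gain))
    return A

# Candidate sensor sites on a line; unlike entropy-based placement,
# the MI criterion avoids wasting sensors on the boundary.
x = np.linspace(0.0, 1.0, 5)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.3) ** 2)
placement = greedy_mi_placement(K, 2)
```

Boundary sites are poorly predicted by the remaining sites, so their MI gain is low; interior sites win, which matches the intuition the criterion was designed to capture.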
Robust Submodular Observation Selection
In many applications, one has to actively select among a set of expensive observations before making an informed decision. For example, in environmental monitoring, we want to select locations to …