Adaptive Submodular Ranking

@inproceedings{Kambadur2017AdaptiveSR,
  title={Adaptive Submodular Ranking},
  author={Prabhanjan Kambadur and Viswanath Nagarajan and Fatemeh Navidi},
  booktitle={IPCO},
  year={2017}
}
We study a general adaptive ranking problem where an algorithm needs to perform a sequence of actions on a random user, drawn from a known distribution, so as to "satisfy" the user as early as possible. The satisfaction of each user is captured by an individual submodular function, where the user is said to be satisfied when the function value goes above some threshold. We obtain a logarithmic factor approximation algorithm for this adaptive ranking problem, which is the best possible. The… 
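The abstract above describes users whose satisfaction is a monotone submodular function with a threshold. A minimal non-adaptive sketch (illustrative only, not the paper's algorithm; the coverage function, set names, and threshold are invented for the example) is the classic greedy rule: repeatedly add the element with the largest marginal gain until the function value reaches the threshold.

```python
# Illustrative sketch: greedy ordering for a single user whose satisfaction
# is a monotone submodular coverage function f(S) = |union of covered items|,
# with the user "satisfied" once f(S) >= threshold. Not the paper's
# adaptive algorithm; names and data are hypothetical.

def coverage(selected, sets):
    """f(S): size of the union of the chosen sets (monotone submodular)."""
    covered = set()
    for e in selected:
        covered |= sets[e]
    return len(covered)

def greedy_order(sets, threshold):
    """Pick elements by largest marginal coverage gain until satisfied."""
    chosen = []
    remaining = set(sets)
    while coverage(chosen, sets) < threshold and remaining:
        best = max(remaining, key=lambda e: coverage(chosen + [e], sets))
        chosen.append(best)
        remaining.remove(best)
    return chosen

sets = {"a": {1, 2, 3, 4}, "b": {3, 4, 5}, "c": {2}}
order = greedy_order(sets, threshold=5)
```

The paper's setting generalizes this: the user is random, each user type has its own submodular function, and the (logarithmically approximate) algorithm must balance gains across the whole distribution.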
Adaptive Submodular Ranking and Routing
TLDR
A logarithmic factor approximation algorithm is obtained for this adaptive ranking problem where an algorithm needs to adaptively select a sequence of elements so as to "cover" a random scenario at minimum expected cost.
Scenario Submodular Cover
TLDR
This work gives two approximation algorithms for Scenario Submodular Cover, and applies these algorithms to a new problem, Scenario Boolean Function Evaluation, which has applications to other problems involving distributions that are explicitly specified by their support.
Optimal Decision Tree with Noisy Outcomes
  • Su Jia
  • Computer Science, Mathematics
  • 2019
TLDR
These new approximation algorithms provide guarantees that are nearly best-possible and work for the general case of a large number of noisy outcomes per test or per hypothesis where the performance degrades smoothly with this number.
Revisiting the Approximation Bound for Stochastic Submodular Cover
TLDR
A k(ln R + 1) approximation bound for Stochastic Submodular Cover, where k is the state set size, R is the maximum utility of a single item, and the utility function is integer-valued, is presented.
Optimal Decision Tree with Noisy Outcomes
TLDR
This work designs new approximation algorithms that provide guarantees that are nearly best-possible and work for the general case of a large number of noisy outcomes per test or per hypothesis, and evaluates the performance of these algorithms on two natural applications with noise.
Stochastic Submodular Cover with Limited Adaptivity
TLDR
It is shown that for any integer r, there exists a poly-time adaptive algorithm for stochastic submodular cover whose expected cost is $\tilde{O}(Q^{1/r})$ times the expected cost of a fully adaptive algorithm.
Approximating Pandora's Box with Correlations
TLDR
This work presents a general reduction to a simpler version of Pandora’s Box, that only asks to find a value below a certain threshold, and eliminates the need to reason about future values that will arise during the search.
Approximation Algorithms for Stochastic k-TSP
TLDR
This work considers the stochastic $k$-TSP problem, where rewards at vertices are random and the objective is to minimize the expected length of a tour that collects reward $k$, and presents adaptive and non-adaptive $O(\log k)$-approximation algorithms.
The Stochastic Score Classification Problem
TLDR
This work provides approximation algorithms for adaptive and non-adaptive versions of the Stochastic Score Classification Problem, and poses a number of open questions.

References

SHOWING 1-10 OF 47 REFERENCES
Adaptive Submodularity: A New Approach to Active Learning and Stochastic Optimization
TLDR
The concept of adaptive submodularity is introduced, generalizing submodular set functions to adaptive policies, and it is proved that if a problem satisfies this property, a simple adaptive greedy algorithm is guaranteed to be competitive with the optimal policy.
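The adaptive greedy rule referenced in this entry can be sketched as follows. This is a hedged illustration, not the paper's formulation: the items, state distributions, utility function, and `draw_state` callback are all hypothetical. Each selected item reveals a random state, and the policy repeatedly picks the item with the largest expected marginal utility given what has been observed so far.

```python
# Illustrative adaptive greedy policy (hypothetical names and data).
# Each item, once selected, reveals a random state; the policy picks the
# item with the largest expected marginal gain conditioned on observations.

def expected_gain(item, observed, utility, state_dist):
    """E[utility gain of selecting `item`] under its state distribution."""
    base = utility(observed)
    return sum(p * (utility({**observed, item: s}) - base)
               for s, p in state_dist[item].items())

def adaptive_greedy(items, utility, state_dist, target, draw_state):
    """Select items greedily until the observed utility reaches `target`."""
    observed = {}
    while utility(observed) < target:
        candidates = [i for i in items if i not in observed]
        if not candidates:
            break
        best = max(candidates,
                   key=lambda i: expected_gain(i, observed, utility, state_dist))
        observed[best] = draw_state(best)  # observe the realized state
    return observed

# Tiny demo: three items, binary states, utility = sum of observed states.
items = ["x", "y", "z"]
state_dist = {i: {0: 0.5, 1: 0.5} for i in items}
result = adaptive_greedy(items, lambda obs: sum(obs.values()),
                         state_dist, target=2, draw_state=lambda i: 1)
```

Adaptive submodularity is the condition under which this greedy policy carries provable guarantees against the optimal adaptive policy.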
Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization
TLDR
It is proved that if a problem satisfies adaptive submodularity, a simple adaptive greedy algorithm is guaranteed to be competitive with the optimal policy, providing performance guarantees for both stochastic maximization and coverage.
Scenario Submodular Cover
TLDR
This work gives two approximation algorithms for Scenario Submodular Cover, and applies these algorithms to a new problem, Scenario Boolean Function Evaluation, which has applications to other problems involving distributions that are explicitly specified by their support.
Submodular meets Structured: Finding Diverse Subsets in Exponentially-Large Structured Item Sets
TLDR
This work shows via examples that when marginal gains of submodular diversity functions allow structured representations, this enables efficient (sub-linear time) approximate maximization by reducing the greedy augmentation step to inference in a factor graph with appropriately constructed HOPs.
A constant factor approximation algorithm for generalized min-sum set cover
TLDR
A simple randomized constant factor approximation algorithm is given for the generalized min-sum set cover problem, in which we are given a universe of elements and a collection of subsets, each set S having a covering requirement.
Multiple intents re-ranking
TLDR
The multiple intents re-ranking problem, which captures scenarios in which some user makes a query, and there is no information about its real search intent, is introduced and an O(log r)-approximation algorithm is presented, where r is the maximum number of search results that are relevant to any user type.
Near-Optimal Bayesian Active Learning with Noisy Observations
TLDR
EC2 is developed, a novel, greedy active learning algorithm and it is proved that it is competitive with the optimal policy, thus obtaining the first competitiveness guarantees for Bayesian active learning with noisy observations.
A Class of Submodular Functions for Document Summarization
TLDR
A class of submodular functions meant for document summarization tasks is presented, combining two terms: one that encourages the summary to be representative of the corpus, and another that positively rewards diversity; for this class, an efficient scalable greedy optimization scheme has a constant factor guarantee of optimality.
Approximation Algorithms for Optimal Decision Trees and Adaptive TSP Problems
TLDR
The first poly-logarithmic approximation is given, and it is shown that this algorithm is best possible unless the approximation guarantees for the well-known group Steiner tree problem are improved.
Average-Case Active Learning with Costs
TLDR
This analysis extends previous work to a more general setting in which different queries have different costs, and discusses an approximate version of interest when there are very many queries.
...