• Publications
Guarantees for Greedy Maximization of Non-submodular Functions with Applications
TLDR
It is proved that GREEDY enjoys a tight approximation guarantee of (1/α)(1 − e^(−γα)) for cardinality-constrained maximization, and the submodularity ratio and curvature are bounded for several important real-world objectives, including the Bayesian A-optimality objective and certain linear programs with combinatorial constraints.
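The GREEDY procedure analyzed above can be sketched as follows. This is a minimal illustration only, not the paper's implementation; the function names and the toy modular objective are assumptions, and the (1/α)(1 − e^(−γα)) guarantee concerns general non-submodular objectives characterized by the submodularity ratio γ and curvature α.

```python
def greedy_maximize(f, ground_set, k):
    """Select up to k elements, each time adding the one with the
    largest marginal gain f(S ∪ {e}) - f(S)."""
    selected = set()
    for _ in range(k):
        best, best_gain = None, float("-inf")
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)
            if gain > best_gain:
                best, best_gain = e, gain
        selected.add(best)
    return selected

# Toy usage with a modular (hence submodular) objective:
weights = {"a": 5, "b": 3, "c": 1}
f = lambda S: sum(weights[e] for e in S)
print(greedy_maximize(f, set(weights), 2))  # picks the two highest-weight elements
```

For a modular objective like this one, greedy is exactly optimal; the paper's contribution is quantifying how far greedy can degrade when f is only weakly submodular.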
Generalization in Reinforcement Learning with Selective Noise Injection and Information Bottleneck
TLDR
This work proposes Selective Noise Injection (SNI), which maintains the regularizing effect the injected noise has, while mitigating the adverse effects it has on the gradient quality, and demonstrates that the Information Bottleneck is a particularly well suited regularization technique for RL as it is effective in the low-data regime encountered early on in training RL agents.
Learning Mixtures of Submodular Functions for Image Collection Summarization
TLDR
This paper provides the first systematic approach for quantifying the problem of image collection summarization, along with a new data set of image collections and human summaries, and introduces a novel summary evaluation method called V-ROUGE.
EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE
TLDR
In EDDI, a novel partial variational autoencoder is proposed to predict missing data entries probabilistically given any subset of the observed ones, combined with an acquisition function that maximizes expected information gain on a set of target variables.
Introduction to Probabilistic Graphical Models
TLDR
This tutorial provides an introduction to probabilistic graphical models and discusses maximum likelihood and Bayesian learning, as well as generative and discriminative learning, and typical applications for each of the three representations.
Differentiable Submodular Maximization
TLDR
The error made by the approach is theoretically characterized, yielding insights into the trade-off between smoothness and accuracy, and its effectiveness is demonstrated for jointly learning and optimizing on synthetic max-cut data and on a real-world product recommendation application.
On Theoretical Properties of Sum-Product Networks
TLDR
It is shown that the weights of any complete and consistent SPN can be transformed into locally normalized weights without changing the SPN distribution, and that consistent SPNs cannot model distributions significantly (exponentially) more compactly than decomposable SPNs.
Bayesian Network Classifiers with Reduced Precision Parameters
TLDR
This paper investigates the effect of precision reduction of the parameters on the classification performance of Bayesian network classifiers and indicates that BNCs with discriminatively optimized parameters are almost as robust to precision reduction as BNCs with generatively optimized parameters.
Successor Uncertainties: exploration and uncertainty in temporal difference learning
TLDR
Successor Uncertainties (SU), a cheap and easy-to-implement RVF algorithm that retains key properties of PSRL, is designed and outperforms its closest RVF competitor, Bootstrapped DQN, on hard tabular exploration benchmarks.
Maximum Margin Bayesian Network Classifiers
TLDR
It is shown that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.