Variational Inference for the Indian Buffet Process
TLDR
We develop a deterministic variational method for inference in the IBP based on a truncated stick-breaking approximation, provide theoretical bounds on the truncation error, and evaluate our method in several data regimes.
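The stick-breaking representation behind the truncated approximation can be sketched in a few lines. This is a toy illustration of the stick-breaking construction of the IBP (feature probabilities are products of Beta-distributed stick proportions), not the paper's variational algorithm itself; the function name `truncated_ibp_sticks` is hypothetical.

```python
import numpy as np

def truncated_ibp_sticks(alpha, K, n_rows, rng=None):
    """Sample a binary feature matrix from a truncated stick-breaking
    construction of the Indian Buffet Process.

    Feature k is active with probability pi_k = prod_{i<=k} v_i,
    where v_i ~ Beta(alpha, 1); truncating at K features gives the
    finite approximation that a variational method can work with."""
    rng = np.random.default_rng(rng)
    v = rng.beta(alpha, 1.0, size=K)   # stick-breaking proportions
    pi = np.cumprod(v)                 # non-increasing feature probabilities
    Z = rng.random((n_rows, K)) < pi   # each row activates features independently
    return pi, Z.astype(int)
```

Because `pi` is a cumulative product of values in (0, 1), later features are used ever more rarely, which is what bounds the error of truncating at K.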
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
TLDR
In this work, we evaluate the effectiveness of defenses that differentiably penalize the degree to which small changes in inputs can alter model predictions.
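The core idea — adding a differentiable penalty on the input gradient of the loss — can be sketched for logistic regression, where the input gradient has the closed form (p − y)·w. This is a minimal sketch, not the paper's deep-network implementation: central finite differences stand in for the double backpropagation used in practice, and all names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def penalized_loss(w, X, y, lam):
    """Cross-entropy plus a penalty on the input gradient of the loss.

    For logistic regression the input gradient is dL/dx = (p - y) * w,
    so the squared-norm penalty reduces to
    lam * mean((p - y)^2) * ||w||^2."""
    p = sigmoid(X @ w)
    ce = -np.mean(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))
    grad_pen = np.mean((p - y) ** 2) * np.dot(w, w)
    return ce + lam * grad_pen

def train(X, y, lam, steps=200, lr=0.5, eps=1e-5):
    """Toy-scale gradient descent via central finite differences
    (a stand-in for autodiff; fine for two parameters)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        g = np.array([(penalized_loss(w + eps * e, X, y, lam)
                       - penalized_loss(w - eps * e, X, y, lam)) / (2.0 * eps)
                      for e in np.eye(X.shape[1])])
        w -= lr * g
    return w
```

Training with a larger `lam` shrinks the weight norm, and hence the size of the input gradients — smaller input perturbations are needed to change the prediction less.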
Accelerated sampling for the Indian Buffet Process
TLDR
We often seek to identify co-occurring hidden features in a set of observations.
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
TLDR
We introduce a method for efficiently explaining and regularizing differentiable models by examining and selectively penalizing their input gradients, which provide a normal to the decision boundary.
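The "selective" part — penalizing input gradients only on features an annotator marks as irrelevant — can be sketched for logistic regression, where the per-example input gradient of the log-loss is (p − y)·w. This is a hedged toy illustration of the masked penalty, not the paper's method for deep networks; `rrr_penalty` and the mask convention are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rrr_penalty(w, X, y, mask):
    """Masked input-gradient penalty ('right reasons' style).

    mask[j] = 1 marks feature j as one the model *should not* rely on;
    the penalty is the mean squared masked input gradient, which for
    logistic regression is (p - y) * w per example."""
    p = sigmoid(X @ w)
    grads = (p - y)[:, None] * w[None, :]   # per-example input gradients
    return np.mean((mask[None, :] * grads) ** 2)
```

Masking the feature the model leans on hardest yields a much larger penalty than masking a near-unused one, so minimizing it pushes the model away from the annotated-irrelevant features.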
Unfolding physiological state: mortality modelling in intensive care units
TLDR
We examined the use of latent variable models (viz. Latent Dirichlet Allocation) to decompose free-text hospital notes into meaningful features, and the predictive power of these features for patient mortality.
The Infinite Partially Observable Markov Decision Process
TLDR
We propose an infinite POMDP (iPOMDP) model that does not require knowledge of the size of the state space; instead, it assumes the number of visited states will grow as the agent explores its world and only models visited states explicitly.
Accountability of AI Under the Law: The Role of Explanation
TLDR
The ubiquity of systems using artificial intelligence or "AI" has brought increasing attention to how those systems should be regulated.
A Bayesian Framework for Learning Rule Sets for Interpretable Classification
TLDR
We present a machine learning algorithm for building classifiers composed of a small number of short rules that concisely describe a specific class.
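What such a classifier looks like at prediction time can be shown in one function: a rule set labels an example positive if any rule (a short conjunction of feature tests) fires. This sketches only the rule-application step, not the Bayesian learning procedure; the representation chosen here is an assumption for illustration.

```python
def rule_set_predict(rules, x):
    """Classify x as positive if any rule fires.

    Each rule is a conjunction of (feature_index, required_value)
    tests, e.g. [(0, 1), (2, 0)] reads 'x[0] == 1 AND x[2] == 0'."""
    return any(all(x[i] == v for i, v in rule) for rule in rules)
```

The appeal is interpretability: each rule is short enough to read aloud, and the full classifier is just a handful of such rules.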
Online Discovery of Feature Dependencies
TLDR
We introduce incremental Feature Dependency Discovery (iFDD) as a general, model-free representational learning algorithm that expands the initial representation by creating new features which are defined in low dimensional subspaces of the full state space.
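The expansion mechanism can be sketched as follows: attribute TD error to pairs of co-active base features, and promote a pair to a new conjunctive feature once its accumulated error crosses a threshold. This is a simplified variant for illustration — it accumulates absolute TD error, whereas iFDD's actual relevance statistic differs in detail — and the class name `IFDD` here is just a label for the sketch.

```python
from itertools import combinations

class IFDD:
    """Toy sketch of incremental feature-dependency discovery:
    accumulate TD error for pairs of co-active features and promote a
    pair to a conjunctive feature once its total crosses a threshold."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.relevance = {}    # candidate pair -> accumulated |TD error|
        self.features = set()  # discovered conjunctive features

    def observe(self, active, td_error):
        """active: set of currently active base-feature indices."""
        for pair in combinations(sorted(active), 2):
            if pair in self.features:
                continue  # already part of the representation
            self.relevance[pair] = self.relevance.get(pair, 0.0) + abs(td_error)
            if self.relevance[pair] > self.threshold:
                self.features.add(pair)  # expand the representation
        return self.features
```

Because conjunctions are only added where error persistently concentrates, the representation grows sparsely rather than enumerating all feature subsets up front.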
A Bayesian nonparametric approach to modeling motion patterns
TLDR
We propose modeling target motion patterns as a mixture of Gaussian processes (GP) with a Dirichlet process (DP) prior over mixture weights.
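A single mixture component is a standard GP regressor, which can be sketched in closed form. This is a minimal 1-D illustration of the GP piece only (squared-exponential kernel, posterior mean), not the DP mixture machinery; kernel hyperparameters and function names are assumptions for the example.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel between 1-D input vectors a and b."""
    d = a[:, None] - b[None, :]
    return sf ** 2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """GP regression posterior mean: one motion pattern is a GP
    mapping position (1-D here for brevity) to, e.g., velocity."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)
```

In the mixture, each observed trajectory is assigned to one such GP, with the DP prior letting the number of motion patterns grow with the data.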