Towards A Rigorous Science of Interpretable Machine Learning
This position paper defines interpretability, describes when it is (and is not) needed, proposes a taxonomy for rigorous evaluation, and exposes open questions toward a more rigorous science of interpretable machine learning.
Variational Inference for the Indian Buffet Process
A deterministic variational method for inference in the IBP based on a truncated stick-breaking approximation is developed, theoretical bounds on the truncation error are provided, and the method is evaluated in several data regimes.
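The truncated stick-breaking construction underlying this variational method can be illustrated with a short sketch (a minimal numpy illustration of the generative side only, not the paper's variational inference; the function name and parameters are mine):

```python
import numpy as np

def truncated_ibp_stick_breaking(alpha, K, N, seed=None):
    """Sample feature probabilities and a binary feature matrix Z from a
    truncated stick-breaking construction of the Indian Buffet Process.

    alpha: IBP concentration parameter
    K:     truncation level (number of sticks kept)
    N:     number of data points (rows of Z)
    """
    rng = np.random.default_rng(seed)
    # Stick-breaking: v_k ~ Beta(alpha, 1), pi_k = prod_{j<=k} v_j,
    # so the feature probabilities pi_1 >= pi_2 >= ... decay with k.
    v = rng.beta(alpha, 1.0, size=K)
    pi = np.cumprod(v)
    # Each entry z_{nk} ~ Bernoulli(pi_k): data point n uses feature k.
    Z = (rng.random((N, K)) < pi).astype(int)
    return pi, Z

pi, Z = truncated_ibp_stick_breaking(alpha=2.0, K=10, N=5, seed=0)
```

Truncating at level K is what makes the variational posterior finite-dimensional; the paper's bounds quantify the error this truncation introduces.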
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
This work introduces a method for efficiently explaining and regularizing differentiable models by examining and selectively penalizing their input gradients, which provide a normal to the decision boundary.
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
It is demonstrated that regularizing input gradients makes them more naturally interpretable as rationales for model predictions, and also yields robustness to transferred adversarial examples generated to fool other models.
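The core idea shared by these two papers, penalizing the gradient of the loss with respect to the *inputs*, can be sketched for logistic regression, where that gradient has the closed form (p - y) w (a hedged illustration in plain numpy; the function and hyperparameter names are mine, and the papers apply the penalty to deep networks, not this toy model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_penalty_step(w, X, y, lam=0.1, lr=0.1):
    """One gradient step on logistic log-loss plus an input-gradient penalty.

    For logistic regression the gradient of the per-example log-loss with
    respect to the input x is (p - y) * w, so the added penalty
    lam * mean_n ||(p_n - y_n) * w||^2 discourages predictions that are
    sensitive to small input perturbations.
    """
    n = len(y)
    p = sigmoid(X @ w)            # predicted probabilities
    r = p - y                     # residuals
    # gradient of the data-fit (log-loss) term w.r.t. w
    g_fit = X.T @ r / n
    # penalty mean_n (p_n - y_n)^2 * ||w||^2; its gradient w.r.t. w uses
    # dp/dw = p(1-p) x, giving the two terms below
    s = p * (1.0 - p)
    g_pen = 2.0 * np.mean(r**2) * w + 2.0 * (X.T @ (r * s)) * np.dot(w, w) / n
    return w - lr * (g_fit + lam * g_pen)
```

In the deep-network setting the same penalty is computed by "double backpropagation" through the model rather than in closed form.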
Accelerated sampling for the Indian Buffet Process
This work presents a new linear-time collapsed Gibbs sampler for conjugate likelihood models and demonstrates its efficacy on large real-world datasets.
Unfolding physiological state: mortality modelling in intensive care units
This work examined the use of latent variable models to decompose free-text hospital notes into meaningful features, finding that latent topic-derived features were effective in predicting patient mortality across three timelines: in-hospital, 30-day post-discharge, and 1-year post-discharge.
A Bayesian Framework for Learning Rule Sets for Interpretable Classification
- Tong Wang, C. Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, P. MacNeille
- Computer Science · J. Mach. Learn. Res.
The method (Bayesian Rule Sets - BRS) is applied to characterize and predict user behavior with respect to in-vehicle context-aware personalized recommender systems and has a major advantage over classical associative classification methods and decision trees.
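A rule set of the kind BRS learns is a disjunction of conjunctions: an example is classified positive if it satisfies every condition of at least one rule. A minimal sketch of that prediction structure (the rules and helper name here are hypothetical, purely to show the "OR of ANDs" form; BRS's Bayesian learning of the rules is not shown):

```python
def rule_set_predict(rules, x):
    """Classify x as 1 if any rule fires, i.e. if all of that rule's
    (feature, value) conditions hold; otherwise 0.

    rules: list of dicts mapping feature name -> required value
    x:     dict mapping feature name -> observed value
    """
    return int(any(all(x.get(feat) == val for feat, val in rule.items())
                   for rule in rules))

# Hypothetical two-rule set for an in-vehicle recommender scenario
rules = [{"driving": False, "time": "lunch"}, {"passenger": True}]
rule_set_predict(rules, {"driving": False, "time": "lunch", "passenger": False})  # fires rule 1
```

Because each rule is a short, human-readable conjunction, the learned classifier doubles as a description of the user behavior it predicts.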
The Infinite Partially Observable Markov Decision Process
- Finale Doshi-Velez
- Computer Science, Mathematics · NIPS
- 7 December 2009
An infinite POMDP (iPOMDP) model is defined that does not require knowledge of the size of the state space and assumes that the number of visited states will grow as the agent explores its world and only models visited states explicitly.
Accountability of AI Under the Law: The Role of Explanation
- Finale Doshi-Velez, Mason Kortz, +7 authors Alexandra Wood
- Computer Science, Mathematics · ArXiv
- 3 November 2017
Contrary to the popular view of AI systems as indecipherable black boxes, it is found that this level of explanation should often be technically feasible but may sometimes be practically onerous; in the future, AI systems can and should be held to a standard of explanation similar to the one humans are currently held to.
Representation Balancing MDPs for Off-Policy Policy Evaluation
A new finite-sample generalization error bound for value estimates from MDP models is proposed, and a learning algorithm for an MDP model with a balanced representation is developed that yields substantially lower MSE on common synthetic benchmarks and an HIV treatment simulation domain.