Corpus ID: 246294455

Model Agnostic Interpretability for Multiple Instance Learning

Joseph Early, Christine Evers and Sarvapali D. Ramchurn
In Multiple Instance Learning (MIL), models are trained using bags of instances, where only a single label is provided for each bag. A bag label is often determined by only a handful of key instances within the bag, making it difficult to interpret what information a classifier is using to make decisions. In this work, we establish the key requirements for interpreting MIL models. We then go on to develop several model-agnostic approaches that meet these requirements. Our methods are compared…
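As an illustrative aside (not from the paper), the standard MIL assumption described above, where a bag's label hinges on a handful of key instances, can be sketched with a max-pooling bag classifier. The instance scores here are hypothetical outputs of some instance-level model:

```python
# A minimal sketch of the standard MIL assumption: a bag is positive
# iff it contains at least one positive ("key") instance. The scores
# below are hypothetical outputs of an instance-level classifier.

def bag_label(instance_scores, threshold=0.5):
    """Max-pooling bag classifier: the bag score is the highest
    instance score, so a single key instance decides the label."""
    return max(instance_scores) >= threshold

positive_bag = [0.1, 0.2, 0.9, 0.3]   # one key instance (0.9)
negative_bag = [0.1, 0.2, 0.3, 0.05]  # no key instances

print(bag_label(positive_bag), bag_label(negative_bag))
```

Interpreting such a model amounts to identifying which instances drove the bag score, which is exactly what the key-instance requirement above targets.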
Non-Markovian Reward Modelling from Trajectory Labels via Interpretable Multiple Instance Learning
This work shows how reward modelling (RM) can be approached as a multiple instance learning (MIL) problem, extends RM to include hidden state information that captures temporal dependencies in human assessment of trajectories, and develops new MIL models that can capture the time dependencies in labelled trajectories.


Multiple instance learning: A survey of problem characteristics and applications
Attention-based Deep Multiple Instance Learning
This paper proposes a neural-network-based, permutation-invariant aggregation operator that corresponds to the attention mechanism; it achieves performance comparable to the best MIL methods on benchmark MIL datasets and outperforms other methods on an MNIST-based MIL dataset and two real-life histopathology datasets, without sacrificing interpretability.
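A minimal plain-Python sketch of such an attention-based, permutation-invariant pooling operator; the parameter matrices `V` and `w` are made-up illustrative values here, whereas a real model would learn them:

```python
import math

def attention_pool(instance_embeddings, V, w):
    """Attention pooling in the style of attention-based deep MIL:
    a_k ∝ exp(w · tanh(V · h_k)); bag embedding = Σ_k a_k h_k.
    Permutation-invariant: weights depend only on each instance
    and are normalised over the whole bag."""
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]

    scores = [sum(wi * math.tanh(vi) for wi, vi in zip(w, matvec(V, h)))
              for h in instance_embeddings]
    # numerically stable softmax over instance scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]
    dim = len(instance_embeddings[0])
    bag = [sum(a * h[d] for a, h in zip(attn, instance_embeddings))
           for d in range(dim)]
    return bag, attn

embs = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # toy instance embeddings
V = [[0.3, -0.2], [0.1, 0.4]]                # hypothetical parameters
w = [1.0, -0.5]
bag, attn = attention_pool(embs, V, w)
```

The attention weights `attn` are what make this operator interpretable: they directly indicate how much each instance contributed to the bag embedding.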
Revisiting multiple instance neural networks
Multiple instance learning with graph neural networks
This paper proposes a new end-to-end graph neural network (GNN) based algorithm for MIL that treats each bag as a graph and uses a GNN to learn the bag embedding, in order to exploit useful structural information among the instances in a bag.
Solving the Multiple Instance Problem with Axis-Parallel Rectangles
Nested Multiple Instance Learning with Attention Mechanisms
A Nested Multiple Instance with Attention (NMIA) model architecture is proposed, combining the concept of nesting with attention mechanisms; it is shown that NMIA performs as conventional MIL in simple scenarios and can handle a complex scenario, providing insights into the latent labels at different levels.
Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression
This work presents a version of KernelSHAP for stochastic cooperative games that yields fast new estimators for two global explanation methods, and a variance reduction technique that further accelerates the convergence of both estimators.
In Defense of LSTMs for Addressing Multiple Instance Learning Problems
Empirical evaluation of LSTMs on both simplified and realistic datasets shows that they are competitive with, or even surpass, state-of-the-art methods specially designed for specific MIL problems, and that their instance-level prediction performance is close to that of fully supervised methods.
Interpretable Machine Learning
This project introduces Robust TCAV, which builds on TCAV and experimentally determines best practices for the method; it is a step toward making TCAV, an already impactful algorithm in interpretability, more reliable and useful for practitioners.
A Value for n-person Games
At the foundation of the theory of games is the assumption that the players of a game can evaluate, in their utility scales, every "prospect" that might arise as a result of a play…
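The Shapley value this paper introduces assigns each player their marginal contribution averaged over all orderings of the players. A brute-force sketch for small games (the characteristic function `v` below is a toy assumption, not from the paper):

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution over all n! orderings (feasible only for small n;
    KernelSHAP-style methods approximate this for large n)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = []
        for p in order:
            before = v(frozenset(coalition))
            coalition.append(p)
            phi[p] += v(frozenset(coalition)) - before
    for p in players:
        phi[p] /= math.factorial(n)
    return phi

# Toy game: the coalition is worth 1 iff player "A" participates.
players = ["A", "B", "C"]
v = lambda S: 1.0 if "A" in S else 0.0
phi = shapley_values(players, v)
```

By the efficiency axiom, the values sum to the worth of the grand coalition; here all the value is attributed to "A", the only player whose membership changes any coalition's worth.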