Bag Graph: Multiple Instance Learning using Bayesian Graph Neural Networks

@inproceedings{Pal2022BagGM,
  title={Bag Graph: Multiple Instance Learning using Bayesian Graph Neural Networks},
  author={Soumyasundar Pal and Antonios Valkanas and Florence Regol and Mark Coates},
  booktitle={AAAI},
  year={2022}
}
Multiple Instance Learning (MIL) is a weakly supervised learning problem where the aim is to assign labels to sets or bags of instances, as opposed to traditional supervised learning where each instance is assumed to be independent and identically distributed (IID) and is to be labeled individually. Recent work has shown promising results for neural network models in the MIL setting. Instead of focusing on each instance, these models are trained in an end-to-end fashion to learn effective bag… 



References

Showing 1–10 of 34 references

Multiple instance learning with graph neural networks

TLDR
This paper proposes a new end-to-end graph neural network (GNN) based algorithm for MIL that treats each bag as a graph and uses a GNN to learn the bag embedding, in order to exploit the structural information among the instances in each bag.
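The bag-as-graph idea summarized above can be sketched as follows — a hedged, numpy-only illustration, not the cited paper's architecture: build a dense affinity matrix from pairwise instance distances, run one GCN-style propagation step, and mean-pool to get a bag embedding. The Gaussian-kernel affinity and single-layer propagation are assumptions made for brevity.

```python
import numpy as np

def bag_to_graph(bag, sigma=1.0):
    """Dense affinity (adjacency) matrix from pairwise instance distances."""
    d2 = ((bag[:, None, :] - bag[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))   # Gaussian kernel affinities

def gcn_bag_embedding(bag, W):
    """One GCN-style propagation step, then mean pooling over the bag."""
    A = bag_to_graph(bag)
    A_hat = A / A.sum(axis=1, keepdims=True)  # row-normalised propagation
    H = np.maximum(A_hat @ bag @ W, 0.0)      # aggregate, transform, ReLU
    return H.mean(axis=0)                     # permutation-invariant bag embedding

rng = np.random.default_rng(1)
bag = rng.normal(size=(6, 4))   # one bag of 6 four-dimensional instances
W = rng.normal(size=(4, 3))
z = gcn_bag_embedding(bag, W)
z_perm = gcn_bag_embedding(bag[::-1], W)  # reordering instances permutes A consistently
```

Reordering instances permutes the rows and columns of the affinity matrix together, so the pooled embedding is unchanged — the structural information is used without breaking permutation invariance.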

A Flexible Generative Framework for Graph-based Semi-supervised Learning

TLDR
This work proposes a flexible generative framework for graph-based semi-supervised learning, which models the joint distribution of the node features, labels, and graph structure, and exploits recent advances in scalable variational inference to approximate the Bayesian posterior.

Bayesian graph convolutional neural networks for semi-supervised classification

TLDR
A Bayesian GCNN framework is presented and an iterative learning procedure for the case of assortative mixed-membership stochastic block models is developed, demonstrating that the Bayesian formulation can provide better performance when there are very few labels available during the training process.
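The Bayesian formulation summarized above averages GCN predictions over graphs drawn from a posterior rather than conditioning on one observed graph. A minimal sketch of that predictive averaging, with randomly generated symmetric adjacency matrices standing in for posterior graph samples (the sampler itself, e.g. a stochastic block model, is omitted here):

```python
import numpy as np

def gcn_forward(A, X, W):
    """Single row-normalised GCN layer (illustrative)."""
    A_hat = A / A.sum(axis=1, keepdims=True)
    return np.maximum(A_hat @ X @ W, 0.0)

def bayesian_gcn_predict(X, W, graph_samples):
    """Monte Carlo estimate of the Bayesian predictive:
    average the GCN output over graphs sampled from the posterior."""
    return np.mean([gcn_forward(A, X, W) for A in graph_samples], axis=0)

rng = np.random.default_rng(3)
X = rng.normal(size=(5, 4))
W = rng.normal(size=(4, 2))

# Hypothetical "posterior" samples: random symmetric graphs with self-loops.
samples = []
for _ in range(3):
    A = (rng.random((5, 5)) > 0.5).astype(float)
    samples.append(np.maximum(A, A.T) + np.eye(5))

pred = bayesian_gcn_predict(X, W, samples)  # (n_nodes, n_classes) averaged output
```

Averaging over graph samples is what lets the Bayesian variant hedge against an uncertain or noisy observed graph, which is especially useful when few labels are available.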

Multi-instance learning by treating instances as non-I.I.D. samples

TLDR
This paper explicitly maps every bag to an undirected graph and designs a graph kernel for distinguishing positive and negative bags; it also implicitly constructs graphs by deriving affinity matrices and proposes an efficient graph kernel that considers clique information.

Learning from Networks of Distributions

TLDR
The goal is to design a learning algorithm that takes into account the graph structure as well as the information from the distributions, so that the algorithm can perform a task such as regression or classification.

Multiple Instance Learning on Structured Data

TLDR
This paper explores the research problem of multiple instance learning on structured data (MILSD) and formulates a novel framework that incorporates additional structure information and has a provable convergence property, with specified precision on each set of constraints.

Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings

TLDR
The proposed framework can outperform state-of-the-art Bayesian and non-Bayesian graph neural network algorithms on the task of semi-supervised classification in the absence of graph data and when the network structure is subjected to adversarial perturbations.

GraphSAINT: Graph Sampling Based Inductive Learning Method

TLDR
GraphSAINT is proposed, a graph-sampling-based inductive learning method that improves training efficiency in a fundamentally different way by decoupling the sampling process from the forward and backward propagation of training; GraphSAINT can also be extended with other graph samplers and GCN variants.

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
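The inductive recipe described above can be illustrated with one mean-aggregator step: sample a fixed number of neighbours, average their features, concatenate with the node's own features, and apply a learned transform. This is a simplified sketch of that pattern, not the library implementation; the names (`sage_step`, `adj_list`) are hypothetical.

```python
import numpy as np

def sage_step(features, adj_list, node, num_samples, W, rng):
    """One GraphSAGE-style mean-aggregation step for a single node.

    features : (n_nodes, d) input node features
    adj_list : dict mapping node -> list of neighbour indices
    W        : (2*d, d_out) weight applied to [self ; neighbour-mean]
    """
    neigh = adj_list[node]
    k = min(num_samples, len(neigh))
    sampled = rng.choice(neigh, size=k, replace=False)   # fixed-size neighbour sample
    h_neigh = features[sampled].mean(axis=0)             # mean aggregator
    h = np.concatenate([features[node], h_neigh])        # self + neighbourhood
    return np.maximum(h @ W, 0.0)                        # transform + ReLU

rng = np.random.default_rng(2)
X = rng.normal(size=(5, 4))
adj = {0: [1, 2, 3], 1: [0], 2: [0, 4], 3: [0], 4: [2]}
W = rng.normal(size=(8, 4))

h0 = sage_step(X, adj, node=0, num_samples=2, W=W, rng=rng)
```

Because the step only needs a node's features and a sampled neighbourhood, it applies unchanged to nodes unseen at training time — the property that makes the method inductive.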

Revisiting multiple instance neural networks