Corpus ID: 2153770

Multiple Instance Learning: Algorithms and Applications

@inproceedings{Babenko2008MultipleIL,
  title={Multiple Instance Learning: Algorithms and Applications},
  author={Boris Babenko},
  year={2008}
}
Traditional supervised learning requires a training data set consisting of inputs and corresponding labels. In many applications, however, it is difficult or even impossible to accurately and consistently assign labels to inputs. A relatively new learning paradigm called Multiple Instance Learning allows a classifier to be trained from ambiguously labeled data. This paradigm has received much attention in the last several years and has useful applications in a number of domains…
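The core MIL assumption behind this paradigm can be sketched as follows. This is a minimal illustration under the standard assumption that a bag is positive iff it contains at least one positive instance; the bags below are hypothetical toy data, not from the paper:

```python
def bag_label(instance_labels):
    """Standard MIL assumption: a bag is labeled positive iff it
    contains at least one positive instance."""
    return int(any(instance_labels))

# Hypothetical toy bags: per-instance labels are unobserved during
# training; only the aggregated bag label would be available.
bags = [[0, 0, 1], [0, 0, 0], [1, 1, 0]]
print([bag_label(b) for b in bags])  # -> [1, 0, 1]
```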

Figures and Tables from this paper

Multiple instance learning: A survey of problem characteristics and applications
TLDR
A comprehensive survey of the characteristics that define and differentiate the types of MIL problems is provided, offering insight into how these characteristics affect MIL algorithms, along with recommendations for future benchmarking and promising avenues for research.
An embarrassingly simple approach to neural multiple instance classification
TLDR
A simple bag-level ranking loss function is proposed that enables multiple instance classification in any neural architecture and is comparable to or better than existing methods in the literature in practical scenarios.
Joint Clustering and Classification for Multiple Instance Learning
TLDR
A novel algorithm is proposed that estimates concepts and classifier parameters by jointly optimizing a classification loss, discovering a small set of discriminative concepts that yield superior classification performance.
Multiple instance learning for visual recognition: Learning latent probabilistic models
TLDR
A boosting framework for MIL is proposed that can model a wide range of soft and linguistic cardinality relations, along with a probabilistic graphical model that captures the interactions between instances, instance labels, and the whole bag.
Multiple instance learning under real-world conditions
TLDR
This thesis proposes a bag classification method that relies on identifying positive instances to train an ensemble of instance classifiers, and that is designed to learn under real-world conditions.
A multi-instance learning algorithm based on a stacked ensemble of lazy learners
TLDR
The algorithm described here is an ensemble-based method in which the members of the ensemble are lazy learning classifiers trained using the Citation Nearest Neighbour method, with the objectives of maximizing Class 1 accuracy and minimizing the false positive rate.
Semi-Supervised Multiple Instance Learning and its application in visual tracking
  • Yu Zhou, Anlong Ming
  • Computer Science
  • 2016 8th International Conference on Wireless Communications & Signal Processing (WCSP)
  • 2016
TLDR
A “bag of instances” representation for the semi-supervised learning process is presented, providing an effective way to use unlabeled data in multiple instance learning problems via a graph model based on the Minimax kernel.
Generalized Dictionaries for Multiple Instance Learning
TLDR
This work presents a multi-class multiple instance learning (MIL) algorithm using the dictionary learning framework, where the data is given in the form of bags, and proposes a noisy-OR model and a generalized mean-based optimization framework for learning the dictionaries in the feature space.
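The noisy-OR model mentioned in this entry combines per-instance probabilities into a bag-level probability. A minimal sketch with hypothetical instance probabilities (not the paper's actual dictionary-learning formulation):

```python
import numpy as np

def noisy_or(instance_probs):
    """Noisy-OR combination: the bag is positive unless every
    instance is negative, so P(bag=1) = 1 - prod(1 - p_i)."""
    p = np.asarray(instance_probs, dtype=float)
    return float(1.0 - np.prod(1.0 - p))

print(noisy_or([0.5, 0.5]))  # -> 0.75
```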
Multiple-Instance Learning for Medical Image and Video Analysis
TLDR
This meta-analysis shows that, besides being more convenient than single-instance learning (SIL) solutions, MIL algorithms are also more accurate in many cases; in other words, MIL is the ideal solution for many medical image and video analysis (MIVA) tasks.
mil-benchmarks: Standardized Evaluation of Deep Multiple-Instance Learning Techniques
TLDR
A series of multiple-instance learning benchmarks generated from MNIST, Fashion-MNIST, and CIFAR10 is introduced, and the Noisy-And method with label noise is evaluated, finding mixed results across datasets.
...

References

SHOWING 1-10 OF 37 REFERENCES
A Framework for Multiple-Instance Learning
TLDR
A new general framework, called Diverse Density, is described, which is applied to learn a simple description of a person from a series of images containing that person, to a stock selection problem, and to the drug activity prediction problem.
Solving the Multiple-Instance Problem: A Lazy Learning Approach
TLDR
This paper investigates the use of lazy learning and Hausdorff distance to approach the multiple-instance problem, and presents two variants of the K-nearest neighbor algorithm, Bayesian-KNN and Citation-KNN, that solve it.
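The bag-level distance underlying Citation-KNN can be illustrated with the minimal Hausdorff distance, i.e. the smallest pairwise distance between instances of two bags. A sketch with hypothetical 2-D instances (the full algorithm also counts "citers" as well as "references" among neighbors, which is omitted here):

```python
import numpy as np

def min_hausdorff(bag_a, bag_b):
    """Minimal Hausdorff distance between two bags: the smallest
    Euclidean distance between any pair of instances."""
    a = np.asarray(bag_a, dtype=float)
    b = np.asarray(bag_b, dtype=float)
    # all pairwise instance distances via broadcasting
    pairwise = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(pairwise.min())

print(min_hausdorff([[0.0, 0.0], [10.0, 10.0]], [[3.0, 4.0]]))  # -> 5.0
```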
Supervised versus multiple instance learning: an empirical comparison
TLDR
This work empirically studies the relationship between supervised and multiple instance (MI) learning by looking at a cross section of MI data sets from various domains, coupled with a number of learning algorithms including Diverse Density, Logistic Regression, nonlinear Support Vector Machines, and FOIL.
Multi-Instance Multi-Label Learning with Application to Scene Classification
TLDR
This paper formalizes multi-instance multi-label learning, where each training example is associated with not only multiple instances but also multiple class labels, and proposes the MIMLBOOST and MIMLSVM algorithms, which achieve good performance in an application to scene classification.
Image Categorization by Learning and Reasoning with Regions
TLDR
This paper presents a new learning technique, which extends Multiple-Instance Learning (MIL), and its application to the problem of region-based image categorization, and provides experimental results on an image categorization problem and a drug activity prediction problem.
Multiple instance learning for sparse positive bags
TLDR
This work presents a new approach to multiple instance learning (MIL) that is particularly effective when the positive bags are sparse (i.e., contain few positive instances) and is the best performing method for image region classification.
Multiple Instance Boosting for Object Detection
TLDR
The MILBoost algorithm adapts the feature selection criterion of boosting to optimize the performance of the Viola-Jones cascade, showing the advantage of simultaneously learning the locations and scales of the objects in the training set along with the parameters of the classifier.
Multiple instance learning with generalized support vector machines
TLDR
Multiple-Instance Learning (MIL) generalizes this problem setting by making weaker assumptions about the labeling information: while each pattern is still believed to possess a true label, training labels are associated with sets or bags of patterns rather than with individual patterns.
Solving the Multiple Instance Problem with Axis-Parallel Rectangles
TLDR
Three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem are described and compared, achieving 89% correct predictions on a musk odor prediction task.
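The axis-parallel rectangle (APR) idea in this entry can be sketched as fitting the tightest box around a set of instances and classifying points by containment; the instances below are hypothetical toy data, not the musk features:

```python
import numpy as np

def fit_apr(instances):
    """Tightest axis-parallel rectangle enclosing the instances,
    returned as (lower_corner, upper_corner)."""
    x = np.asarray(instances, dtype=float)
    return x.min(axis=0), x.max(axis=0)

def in_apr(point, apr):
    """A point is classified positive iff it lies inside the box."""
    lo, hi = apr
    p = np.asarray(point, dtype=float)
    return bool(np.all(lo <= p) and np.all(p <= hi))

apr = fit_apr([[0.0, 1.0], [2.0, 3.0]])
print(in_apr([1.0, 2.0], apr), in_apr([5.0, 0.0], apr))  # -> True False
```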
A Note on Learning from Multiple-Instance Examples
TLDR
All concept classes learnable with one-sided noise (a set that includes all concepts learnable in the usual two-sided random noise model, plus others such as the parity function) are learnable from multiple-instance examples.
...