Discriminative and consistent similarities in instance-level Multiple Instance Learning

@article{Rastegari2015DiscriminativeAC,
  title={Discriminative and consistent similarities in instance-level Multiple Instance Learning},
  author={Mohammad Rastegari and Hannaneh Hajishirzi and Ali Farhadi},
  journal={2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2015},
  pages={740-748}
}
In this paper we present a bottom-up method for instance-level Multiple Instance Learning (MIL) that learns to discover positive instances with globally constrained reasoning about local pairwise similarities. We discover positive instances by optimizing for a ranking such that positive (top-rank) instances are highly and consistently similar to each other and dissimilar to negative instances. Our approach takes advantage of a discriminative notion of pairwise similarity coupled with a…
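The abstract states the method only at a high level. As an illustration of that idea (not the authors' actual optimization), one plausible reading is to score each instance from the positive bags by how consistently similar it is to the other positive-bag instances and how dissimilar it is to the negative instances, then treat the top-ranked candidates as the discovered positives. The sketch below follows that reading; the Gaussian similarity, the scoring rule, and all names are assumptions.

```python
import numpy as np

def rank_positive_candidates(pos_instances, neg_instances, sigma=1.0):
    """Illustrative sketch only: rank instances pooled from positive bags by a
    'consistent similarity' score -- high average similarity to the other
    positive-bag instances, penalized by the strongest similarity to any
    negative instance. This mirrors the intuition in the abstract, not the
    paper's constrained ranking optimization."""
    P = np.asarray(pos_instances)   # (n_pos, d) instances pooled from positive bags
    N = np.asarray(neg_instances)   # (n_neg, d) instances pooled from negative bags

    def gaussian_sim(A, B):
        # pairwise Gaussian similarities between rows of A and rows of B
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    sim_pp = gaussian_sim(P, P)      # similarity among positive-bag instances
    sim_pn = gaussian_sim(P, N)      # similarity to negative instances
    np.fill_diagonal(sim_pp, 0.0)    # ignore self-similarity

    score = sim_pp.sum(axis=1) / max(len(P) - 1, 1) - sim_pn.max(axis=1)
    return np.argsort(-score)        # indices of candidates, best first
```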
PIGMIL: Positive Instance Detection via Graph Updating for Multiple Instance Learning
TLDR
A positive instance detection method via graph updating for multiple instance learning, called PIGMIL, is proposed to detect true positive instances (TPI) accurately, and its excellent performance compared to classic baseline MIL methods is demonstrated.
Sparse multiple instance learning as document classification
TLDR
The proposed methods achieve significantly higher accuracies and AUC than the state of the art on a large number of sparse MIL problems, and the document classification analogy explains their efficacy in sparse MIL problems.
Clustering-based multiple instance learning with multi-view feature
TLDR
This paper proposes a similarity-based method with clustering over multi-view features to solve MIL problems efficiently and demonstrates the effectiveness of the clustering-based MIL (CMIL) model.
Modeling and Optimization of Classifiers with Latent Variables
TLDR
A new optimization framework, called Generalized Majorization-Minimization (G-MM), is introduced that extends existing approaches to non-convex optimization, such as Expectation Maximization (EM) and the Concave-Convex Procedure (CCCP), and does not require bounds to be tight, making it very flexible.
SALE: Self-adaptive LSH encoding for multi-instance learning
TLDR
A self-adaptive LSH encoding method for MIL, termed SALE, is proposed; it deals efficiently with large MIL problems thanks to its low complexity and LSH's ability to exploit key information in MIL.
Generalized Majorization-Minimization
TLDR
This work derives G-MM algorithms for several latent variable models and shows empirically that they consistently outperform their MM counterparts in optimizing non-convex objectives and appear to be less sensitive to initialization.
Aligning Sentences from Standard Wikipedia to Simple Wikipedia
TLDR
This work improves monolingual sentence alignment for text simplification, specifically for text in standard and simple Wikipedia, by using a greedy search over the document and a word-level semantic similarity score based on Wiktionary that also accounts for structural similarity through syntactic dependencies.
cvpaper.challenge in CVPR2015 -- A review of CVPR2015
The “cvpaper.challenge” focuses on reading top conference papers in the fields of computer vision, image processing, pattern recognition, and machine learning. In this challenge, we simultaneously…
cvpaper.challenge in 2016: Futuristic Computer Vision through 1,600 Papers Survey
The paper gives the futuristic challenges discussed in the cvpaper.challenge. In 2015 and 2016, we thoroughly studied 1,600+ papers in several conferences/journals such as CVPR/ICCV/ECCV/NIPS/PAMI/IJCV.

References

Showing 1-10 of 53 references
SMILE: A Similarity-Based Approach for Multiple Instance Learning
TLDR
A novel MIL method named SMILE (Similarity-based Multiple Instance LEarning) is proposed, which introduces a similarity weight for each instance in a positive bag that represents the instance's similarity towards the positive and negative classes.
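The summary only names the notion of a per-instance similarity weight. A minimal, hypothetical sketch of such a weight (not SMILE's actual definition) could contrast each positive-bag instance's affinity to the positive side against its affinity to the negative side:

```python
import numpy as np

def instance_similarity_weights(pos_bag, pos_instances, neg_instances, sigma=1.0):
    """Hypothetical sketch: assign each instance of a positive bag a weight
    reflecting how similar it is to the positive class versus the negative
    class. SMILE's actual weight definition may differ; this only illustrates
    the idea named in the summary."""
    def mean_sim(x, Y):
        # mean Gaussian similarity of instance x to the rows of Y
        return np.exp(-((np.asarray(Y) - x) ** 2).sum(axis=1) / (2.0 * sigma ** 2)).mean()

    weights = []
    for x in np.asarray(pos_bag):
        s_pos = mean_sim(x, pos_instances)   # affinity to the positive side
        s_neg = mean_sim(x, neg_instances)   # affinity to the negative side
        weights.append(s_pos - s_neg)
    return np.asarray(weights)
```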
Instance-level Semisupervised Multiple Instance Learning
TLDR
This paper proposes a new graph-based semi-supervised learning approach for multiple instance learning by defining an instance-level graph on the data, and empirically shows that this method outperforms state-of-the-art MIL algorithms on several real-world data sets.
MILES: Multiple-Instance Learning via Embedded Instance Selection
TLDR
This work proposes a learning method, MILES (Multiple-Instance Learning via Embedded instance Selection), which converts the multiple-instance learning problem to a standard supervised learning problem that does not impose the assumption relating instance labels to bag labels.
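As a rough illustration of the embedding idea, the sketch below maps each bag to a vector of its best matches against a set of candidate concept instances (in MILES these are typically all training instances). Parameter names are illustrative, and the 1-norm SVM / feature-selection step that MILES applies afterwards is omitted:

```python
import numpy as np

def miles_style_embedding(bags, concept_instances, sigma2=1.0):
    """Sketch of a MILES-style bag embedding: each bag becomes a vector whose
    k-th entry is the bag's best (maximum) Gaussian similarity to the k-th
    candidate concept instance. A standard supervised learner is then trained
    on these vectors; that step is not shown here."""
    C = np.asarray(concept_instances)                          # (n_concepts, d)
    embedded = []
    for bag in bags:
        B = np.asarray(bag)                                    # (n_inst, d)
        d2 = ((B[:, None, :] - C[None, :, :]) ** 2).sum(-1)    # (n_inst, n_concepts)
        embedded.append(np.exp(-d2 / sigma2).max(axis=0))      # best match per concept
    return np.vstack(embedded)                                 # (n_bags, n_concepts)
```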
Multi-instance learning by treating instances as non-I.I.D. samples
TLDR
This paper explicitly maps every bag to an undirected graph and designs a graph kernel for distinguishing the positive and negative bags; it also implicitly constructs graphs by deriving affinity matrices and proposes an efficient graph kernel that considers clique information.
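A minimal sketch of the bag-to-graph step described above, assuming a Gaussian affinity and an optional distance threshold for sparsifying edges; the graph kernels proposed in the paper are not reproduced here:

```python
import numpy as np

def bag_affinity_matrix(bag, sigma=1.0, threshold=None):
    """Sketch: represent a bag as an undirected graph by computing a Gaussian
    affinity matrix over its instances and optionally dropping edges between
    instances that are farther apart than a threshold."""
    X = np.asarray(bag)                                   # (n_instances, d)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared pairwise distances
    A = np.exp(-d2 / (2.0 * sigma ** 2))
    if threshold is not None:
        A = np.where(np.sqrt(d2) <= threshold, A, 0.0)    # keep only nearby pairs as edges
    np.fill_diagonal(A, 0.0)                              # no self-loops
    return A
```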
Adaptive p-posterior mixture-model kernels for multiple instance learning
TLDR
This paper proposes an adaptive framework for MIL that adapts to different application domains by learning the domain-specific mechanisms merely from labeled bags, which is especially attractive when instances are encountered in novel application domains for which the mechanisms may be different and unknown.
Multiple Instance Learning by Discriminative Training of Markov Networks
TLDR
A discriminative max-margin learning algorithm leveraging efficient inference for cardinality-based cliques is proposed, and experimental results verify that encoding or learning the degree of ambiguity can improve classification performance.
Convex Multiple-Instance Learning by Estimating Likelihood Ratio
TLDR
An approach to multiple-instance learning is proposed that reformulates the problem as a convex optimization over the likelihood ratio between the positive and the negative class for each training instance, and it is shown that likelihood ratio estimation is generally a good surrogate for the 0-1 loss.
FAMER: Making Multi-Instance Learning Better and Faster
TLDR
FAMER constructs a Locality-Sensitive Hashing based similarity measure for the multi-instance framework and represents each bag as a histogram by embedding the instances within the bag into an auxiliary space, which captures the correspondence information between two bags.
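A hypothetical sketch of the histogram-by-hashing idea: hash each instance with sign-of-random-projection (SimHash-style) codes and represent the bag as a normalized histogram over hash buckets. FAMER's actual construction may differ in detail:

```python
import numpy as np

def random_hyperplanes(dim, n_bits, seed=0):
    # random projection directions for sign-based hashing (assumed scheme)
    return np.random.default_rng(seed).standard_normal((n_bits, dim))

def bag_histogram(bag, planes):
    """Hypothetical sketch: hash every instance of a bag into one of 2**n_bits
    buckets via locality-sensitive sign hashing, then describe the bag by its
    normalized bucket histogram."""
    X = np.asarray(bag)                                   # (n_instances, d)
    bits = (X @ planes.T) > 0                             # (n_instances, n_bits)
    codes = bits.dot(1 << np.arange(planes.shape[0]))     # bucket index per instance
    hist = np.bincount(codes, minlength=1 << planes.shape[0]).astype(float)
    return hist / max(hist.sum(), 1.0)                    # normalized histogram
```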
A Conditional Random Field for Multiple-Instance Learning
TLDR
MI-CRF models bags as nodes in a CRF with instances as their states, and combines discriminative unary instance classifiers and pairwise dissimilarity measures to improve classification performance.
A Framework for Multiple-Instance Learning
TLDR
A new general framework, called Diverse Density, is described and applied to learning a simple description of a person from a series of images containing that person, to a stock selection problem, and to the drug activity prediction problem.
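One common formulation of Diverse Density (a noisy-or model) scores a candidate concept point highly when every positive bag contains an instance near it and no negative instance lies near it. The sketch below evaluates that score, omitting the per-feature scaling weights and the gradient-based search used in practice:

```python
import numpy as np

def diverse_density(t, pos_bags, neg_bags):
    """Sketch of a noisy-or Diverse Density score for a candidate concept
    point t: every positive bag should have at least one instance close to t,
    and no negative instance should be close to t."""
    def p_member(bag):
        # Pr(instance belongs to concept t), modeled as a Gaussian bump around t
        return np.exp(-((np.asarray(bag) - t) ** 2).sum(-1))

    dd = 1.0
    for bag in pos_bags:     # noisy-or: at least one instance should match
        dd *= 1.0 - np.prod(1.0 - p_member(bag))
    for bag in neg_bags:     # no instance should match
        dd *= np.prod(1.0 - p_member(bag))
    return dd
```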