Superset Learning Based on Generalized Loss Minimization

@inproceedings{Huellermeier2015SupersetLB,
  title={Superset Learning Based on Generalized Loss Minimization},
  author={Eyke H{\"u}llermeier and Weiwei Cheng},
  booktitle={ECML/PKDD},
  year={2015}
}
In standard supervised learning, each training instance is associated with an outcome from a corresponding output space (e.g., a class label in classification or a real number in regression). In the superset learning problem, the outcome is only characterized in terms of a superset—a subset of candidates that covers the true outcome but may also contain additional ones. Thus, superset learning can be seen as a specific type of weakly supervised learning, in which training examples are ambiguous… 
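The generalized loss minimization in the title can be illustrated with a small sketch (my own illustration, not code from the paper): for each superset-labeled instance, the loss of a prediction is taken as the minimum of a base loss over the candidate labels, so a prediction is only penalized if it misses every candidate. The names `optimistic_superset_loss` and `zero_one_loss` are mine.

```python
def zero_one_loss(y_true, y_pred):
    # Standard 0/1 loss for a precisely labeled instance.
    return 0.0 if y_true == y_pred else 1.0

def optimistic_superset_loss(candidates, y_pred, base_loss=zero_one_loss):
    # Generalized (infimum) loss: minimum base loss over the candidate set,
    # i.e., the prediction is only penalized if it misses *every* candidate.
    return min(base_loss(y, y_pred) for y in candidates)

# Ambiguously labeled data: each instance carries a superset of candidates
# guaranteed to contain the (unknown) true label.
data = [({"cat", "dog"}, "dog"),   # prediction falls inside the superset
        ({"cat"}, "bird")]         # prediction misses the superset

empirical_risk = sum(optimistic_superset_loss(s, p) for s, p in data) / len(data)
print(empirical_risk)  # 0.5
```

Minimizing this empirical risk over a hypothesis space recovers standard supervised learning when every candidate set is a singleton.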
A Regularization Approach for Instance-Based Superset Label Learning
TLDR
A novel regularization approach for instance-based superset label learning (RegISL) is developed, so that the instance-based method also inherits the good discriminative ability of the regularization scheme.
Feature Reduction in Superset Learning Using Rough Sets and Evidence Theory
TLDR
This article considers the problem of feature reduction as a means for data disambiguation, i.e., for figuring out the most plausible precise instantiation of the imprecise training data, using information-theoretic techniques based on evidence theory.
Credal Self-Supervised Learning
TLDR
The key idea is to let the learner itself iteratively generate “pseudo-supervision” for unlabeled instances based on its current hypothesis; to learn from weakly labeled data of that kind, the authors leverage methods recently proposed in the realm of so-called superset learning.
Partial Label Learning with Self-Guided Retraining
TLDR
This paper provides the first attempt to leverage the idea of self-training for dealing with partially labeled examples by proposing a unified formulation with proper constraints to train the desired model and perform pseudo-labeling jointly.
Instance weighting through data imprecisiation
Structured Prediction with Partial Labelling through the Infimum Loss
TLDR
This paper provides a unified framework, based on structured prediction and the concept of the infimum loss, to deal with partial labelling over a wide family of learning problems and loss functions; the framework leads naturally to explicit algorithms that are easy to implement and for which statistical consistency and learning rates are proved.
Learning with Noisy Partial Labels by Simultaneously Leveraging Global and Local Consistencies
TLDR
A novel PL method, namely PArtial label learNing by simultaneously leveraging GlObal and Local consIsteNcies (Pangolin), which designs a global consistency regularization term that pulls instances with similar labeling confidences together by minimizing the distances between instances and label prototypes, and a local consistency term that pushes instances sharing no candidate labels apart by maximizing their distances.
Large Margin Partial Label Machine
TLDR
A Large Margin Partial LAbel machiNE (LM-PLANE) is proposed by extending multi-class support vector machines (SVMs) to PLL, addressing the core challenge of PLL: each training instance is associated with a set of candidate labels, only one of which is the ground truth.
Dyad Ranking Using a Bilinear Plackett-Luce Model
TLDR
This paper proposes an extension of an existing label ranking method based on the Plackett-Luce model, a statistical model for rank data, and presents first experimental results confirming the usefulness of the additional information provided by the feature description of alternatives.
Detecting the Fake Candidate Instances: Ambiguous Label Learning with Generative Adversarial Networks
TLDR
A novel ALL method, namely Adversarial Ambiguous Label Learning with Candidate Instance Detection (A2L2CID), which outperforms state-of-the-art ALL methods; it is further shown analytically that there is a global equilibrium point between the three players.
...

References

Showing 1-10 of 20 references.
Learning from Partial Labels
TLDR
This work proposes a convex learning formulation based on minimization of a loss function appropriate for the partial label setting, and analyzes the conditions under which this loss function is asymptotically consistent, as well as its generalization and transductive performance.
Learning from ambiguously labeled examples
TLDR
This paper is concerned with ambiguous label classification (ALC), an extension of this setting in which several candidate labels may be assigned to a single example, and shows that appropriately designed learning algorithms can successfully exploit the information contained in ambiguously labeled examples.
A Conditional Multinomial Mixture Model for Superset Label Learning
TLDR
A probabilistic model, the Logistic Stick-Breaking Conditional Multinomial Model (LSB-CMM), is proposed, derived from the logistic stick-breaking process, to solve the superset label learning problem by maximizing the likelihood of the candidate label sets of training instances.
Learning with Multiple Labels
TLDR
This paper proposes a novel discriminative approach for handling the ambiguity of class labels in the training examples, and shows that the approach is able to find the correct label among the set of candidate labels and achieves performance close to the case where each training instance is given a single correct label.
Learnability of the Superset Label Learning Problem
TLDR
Empirical Risk Minimizing (ERM) learners that use the superset error as the empirical risk measure are analyzed, and conditions for ERM learnability as well as the sample complexity for the realizable case are given.
Label ranking by learning pairwise preferences
Learning from imprecise and fuzzy observations: Data disambiguation through generalized loss minimization
Decision tree and instance-based learning for label ranking
TLDR
New methods for label ranking are introduced that complement and improve upon existing approaches and are extensions of two methods that have been used extensively for classification and regression so far, namely instance-based learning and decision tree induction.
A Taxonomy of Label Ranking Algorithms
TLDR
This paper gives an overview of the state of the art in label ranking and provides a basic taxonomy of label ranking algorithms: decomposition methods, probabilistic methods, similarity-based methods, and other methods.
Multi-Label Learning with Weak Label
TLDR
The WELL (WEak Label Learning) method is proposed, which assumes that the classification boundary for each label should go across low-density regions, and that each label generally has a much smaller number of positive examples than negative examples.
...