Corpus ID: 211043971

Bridging Ordinary-Label Learning and Complementary-Label Learning

@inproceedings{Katsura2020BridgingOL,
  title={Bridging Ordinary-Label Learning and Complementary-Label Learning},
  author={Yasuhiro Katsura and Masato Uchida},
  booktitle={ACML},
  year={2020}
}
A supervised learning framework has been proposed for the situation where each training example is provided with a complementary label, which specifies a class to which the pattern does not belong. In the existing literature, complementary-label learning has been studied independently of ordinary-label learning, which assumes that each training example is provided with a label specifying the class to which the pattern belongs. However, providing a complementary label should be treated as…
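
As a concrete illustration of the two settings the abstract contrasts, here is a minimal sketch, assuming the complementary label is drawn uniformly at random from the K-1 incorrect classes (the uniform assumption and all names are illustrative, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5  # number of classes in this toy example

def ordinary_label(y):
    """Ordinary-label learning: the annotator reports the true class y itself."""
    return y

def complementary_label(y):
    """Complementary-label learning: the annotator reports one class the
    pattern does NOT belong to, here drawn uniformly from the K-1 others."""
    wrong = [k for k in range(K) if k != y]
    return rng.choice(wrong)

y_true = 2
print(ordinary_label(y_true), complementary_label(y_true))  # 2 and some k != 2
```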

Citations

Learning from Similarity-Confidence Data
TLDR
This paper proposes an unbiased estimator of the classification risk that can be calculated from only Sconf data and shows that the estimation error bound achieves the optimal convergence rate.
Tsallis Entropy Based Labelling
  • Kentaro Goto, M. Uchida
  • 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), 2020
TLDR
An annotation framework, Tsallis entropy based labelling, is proposed; it dynamically selects the number of labels assigned to each given instance depending on the uncertainty about the class to which that instance belongs.
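
For reference, a minimal sketch of the Tsallis entropy this labelling scheme is built around; the label-count rule at the end is purely hypothetical, not the paper's actual selection rule:

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    """Tsallis entropy S_q(p) = (1 - sum_k p_k ** q) / (q - 1);
    recovers the Shannon entropy in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p + 1e-12)))  # Shannon limit
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# Hypothetical rule: assign more candidate labels when the annotator is uncertain.
posterior = np.array([0.5, 0.3, 0.1, 0.1])   # annotator belief over 4 classes
num_labels = 1 + round(tsallis_entropy(posterior) * (len(posterior) - 1))
```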
Invariance Learning based on Label Hierarchy
TLDR
This work estimates an invariant predictor for the target classification task from training data in a single domain, and proposes two cross-validation methods for selecting the hyperparameters of invariance regularization, an issue that has not been handled properly in existing invariance learning (IL) methods.
Multi-Class Classification from Single-Class Data with Confidences
TLDR
An empirical risk minimization framework is proposed that is loss-, model-, and optimizer-independent and can discriminate between all classes even when no data from the other classes are provided.

References

Showing 1-10 of 30 references
Complementary-Label Learning for Arbitrary Losses and Models
TLDR
This paper derives a novel framework of complementary-label learning with an unbiased estimator of the classification risk for arbitrary losses and models, a goal that no existing method had achieved.
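
A minimal sketch of the kind of unbiased risk rewrite used in this line of work, assuming complementary labels drawn uniformly from the K-1 incorrect classes; the function name and the choice of cross-entropy are illustrative:

```python
import torch
import torch.nn.functional as F

def complementary_risk(logits, comp_labels, num_classes):
    """Unbiased risk estimator under uniform complementary labels:
    R(f) = E[ sum_k loss(f(x), k) - (K - 1) * loss(f(x), y_bar) ]."""
    per_class = torch.stack(
        [F.cross_entropy(logits, torch.full_like(comp_labels, k), reduction="none")
         for k in range(num_classes)],
        dim=1)                                       # per-class losses, shape (n, K)
    sum_all = per_class.sum(dim=1)                   # sum_k loss(f(x), k)
    at_comp = per_class.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return (sum_all - (num_classes - 1) * at_comp).mean()
```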
Learning from Complementary Labels
TLDR
This paper shows that an unbiased estimator of the classification risk can be obtained from complementarily labeled data alone if the loss function satisfies a particular symmetry condition, derives estimation error bounds, and proves that the optimal parametric convergence rate is achieved.
Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models
Learning from Partial Labels
TLDR
This work proposes a convex learning formulation based on minimization of a loss function appropriate for the partial label setting, and analyzes the conditions under which this loss function is asymptotically consistent, as well as its generalization and transductive performance.
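
Not the paper's exact convex formulation, but a common surrogate in the partial-label setting scores the candidate set as a whole, for example the negative log of the total probability mass placed on the candidate labels; a minimal sketch:

```python
import torch

def partial_label_loss(logits, candidate_mask):
    """Naive partial-label surrogate: -log sum_{y in S} p(y | x), where the
    candidate set S is encoded as a 0/1 mask of shape (n, K)."""
    log_probs = torch.log_softmax(logits, dim=1)
    # logsumexp over candidate labels only; -inf masks out non-candidates
    masked = log_probs.masked_fill(candidate_mask == 0, float("-inf"))
    return -torch.logsumexp(masked, dim=1).mean()
```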
Learning from Candidate Labeling Sets
TLDR
A semi-supervised framework is proposed to model this kind of problem, where each training sample is a bag containing multiple instances associated with a set of candidate labeling vectors; using the labeling vectors provides a principled way not to exclude any information.
NLNL: Negative Learning for Noisy Labels
TLDR
This work uses an indirect learning method called Negative Learning (NL), in which CNNs are trained with a complementary label, as in "this input image does not belong to this complementary label."
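
A minimal sketch of a negative-learning style loss, which pushes down the predicted probability of the complementary label; label selection and other details in the paper differ:

```python
import torch

def negative_learning_loss(logits, comp_labels):
    """NL-style loss: -log(1 - p_{y_bar}), lowering the predicted probability
    of the class the image is known NOT to belong to."""
    probs = torch.softmax(logits, dim=1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-7).mean()
```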
Semi-supervised Learning by Entropy Minimization
TLDR
This framework, which motivates minimum entropy regularization, makes it possible to incorporate unlabeled data into standard supervised learning, and includes other approaches to the semi-supervised problem as particular or limiting cases.
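
A minimal sketch of minimum entropy regularization: the standard supervised loss plus an entropy penalty on unlabeled predictions (the weight `lam` is an illustrative hyperparameter):

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(logits_l, targets, logits_u, lam=0.1):
    """Supervised cross-entropy plus an entropy penalty on unlabeled
    predictions, encouraging confident (low-entropy) outputs."""
    sup = F.cross_entropy(logits_l, targets)
    log_p = F.log_softmax(logits_u, dim=1)
    entropy = -(log_p.exp() * log_p).sum(dim=1).mean()
    return sup + lam * entropy
```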
A Conditional Multinomial Mixture Model for Superset Label Learning
TLDR
A probabilistic model, the Logistic Stick-Breaking Conditional Multinomial Model (LSB-CMM), is proposed, derived from the logistic stick-breaking process, to solve the superset label learning problem by maximizing the likelihood of the candidate label sets of training instances.
Binary Classification from Positive-Confidence Data
TLDR
It is shown that if one can equip positive data with confidence (positive-confidence), a binary classifier can be learned successfully; this setting is named positive-confidence (Pconf) classification.
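
A minimal sketch of a Pconf-style empirical risk, assuming each positive sample x_i comes with confidence r_i = p(y = +1 | x_i); the constant positive-class prior is dropped since it does not affect the minimizer, and the hinge loss is just an example:

```python
import torch

def pconf_risk(outputs, r):
    """Pconf-style empirical risk on positive data only:
    mean of loss(f(x)) + (1 - r) / r * loss(-f(x)),
    with r[i] = p(y = +1 | x_i) the given confidence of positive sample i."""
    loss = lambda z: torch.clamp(1.0 - z, min=0.0)  # hinge loss as an example
    return (loss(outputs) + (1.0 - r) / r * loss(-outputs)).mean()
```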
Proper losses for learning from partial labels
TLDR
The concept of proper loss is generalized to this scenario, a necessary and sufficient condition for a loss function to be proper is established, and a direct procedure is shown to construct a proper loss for partial labels from a conventional proper loss.