# Bridging Ordinary-Label Learning and Complementary-Label Learning

```bibtex
@inproceedings{Katsura2020BridgingOL,
  title     = {Bridging Ordinary-Label Learning and Complementary-Label Learning},
  author    = {Yasuhiro Katsura and Masato Uchida},
  booktitle = {ACML},
  year      = {2020}
}
```

A supervised learning framework has been proposed for the situation where each training example is provided with a complementary label, which represents a class to which the pattern does not belong. In the existing literature, complementary-label learning has been studied independently of ordinary-label learning, which assumes that each training example is provided with a label representing the class to which the pattern belongs. However, providing a complementary label should be treated as…
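The two settings are connected by a simple identity that is standard in this line of work, under the usual assumption that the complementary label $\bar{y}$ is drawn uniformly from the $K-1$ classes other than the true label $y$:

```latex
\bar{p}(\bar{y}\mid x) = \frac{1 - p(\bar{y}\mid x)}{K - 1}
\quad\Longrightarrow\quad
\mathbb{E}_{y \sim p(\cdot\mid x)}\big[\ell(f(x), y)\big]
= \sum_{j=1}^{K} \ell(f(x), j) - (K-1)\,\mathbb{E}_{\bar{y} \sim \bar{p}(\cdot\mid x)}\big[\ell(f(x), \bar{y})\big]
```

This rewriting is what lets the ordinary classification risk be estimated without bias from complementarily labeled data alone.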

## 4 Citations

Learning from Similarity-Confidence Data

- Computer Science · ICML
- 2021

This paper proposes an unbiased estimator of the classification risk that can be calculated from only Sconf data and shows that the estimation error bound achieves the optimal convergence rate.

Tsallis Entropy Based Labelling

- Computer Science · 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA)
- 2020

An annotation framework, Tsallis-entropy-based labelling, is proposed; it dynamically selects the number of labels for each given instance depending on the uncertainty regarding the class to which that instance belongs.

Invariance Learning based on Label Hierarchy

- Computer Science · arXiv
- 2022

This work estimates an invariant predictor for the target classification task using training data from a single domain, and proposes two cross-validation methods for selecting the hyperparameters of invariance regularization, addressing a hyperparameter-selection issue that existing IL methods have not handled properly.

Multi-Class Classification from Single-Class Data with Confidences

- Computer Science, Mathematics · arXiv
- 2021

An empirical risk minimization framework is proposed that is loss-, model-, and optimizer-independent and can perform discriminative classification among all the classes even when no data from the other classes are provided.

## References

Showing 1–10 of 30 references

Complementary-Label Learning for Arbitrary Losses and Models

- Computer Science · ICML
- 2019

The goal of this paper is to derive a novel framework of complementary-label learning with an unbiased estimator of the classification risk for arbitrary losses and models, a goal that no existing method had achieved.
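A minimal NumPy sketch of the general unbiased risk estimator described here, assuming the complementary label is drawn uniformly from the K−1 incorrect classes; the function names are illustrative, not from the paper's code.

```python
import numpy as np

def complementary_risk(logits, comp_labels, loss):
    """Unbiased estimate of the ordinary classification risk computed
    from complementarily labeled data alone.

    Uses the rewriting R(f) = E[ sum_j loss(f(x), j) - (K-1) * loss(f(x), ybar) ],
    valid when the complementary label ybar is uniform over the K-1 wrong classes.

    logits      : (n, K) array of classifier scores
    comp_labels : (n,) array of complementary labels in {0, ..., K-1}
    loss        : loss(logits, labels) -> (n,) per-example losses
    """
    n, K = logits.shape
    # Loss against every class j, summed over all K classes.
    all_class = np.stack([loss(logits, np.full(n, j)) for j in range(K)]).sum(axis=0)
    # Loss against the observed complementary label.
    comp = loss(logits, comp_labels)
    return np.mean(all_class - (K - 1) * comp)

def softmax_ce(logits, labels):
    # Numerically stable softmax cross-entropy, one value per example.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels]
```

As a sanity check, with K = 2 the complementary label determines the true label, and the estimator reduces exactly to the ordinary empirical risk.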

Learning from Complementary Labels

- Computer Science · NIPS
- 2017

This paper shows that an unbiased estimator of the classification risk can be obtained from complementarily labeled data alone if the loss function satisfies a particular symmetric condition; estimation error bounds are derived, and the optimal parametric convergence rate is proved to be achieved.

Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models

- Computer Science · Pattern Recognit.
- 2022

Learning from Partial Labels

- Computer Science · J. Mach. Learn. Res.
- 2011

This work proposes a convex learning formulation based on minimization of a loss function appropriate for the partial label setting, and analyzes the conditions under which this loss function is asymptotically consistent, as well as its generalization and transductive performance.

Learning from Candidate Labeling Sets

- Computer Science · NIPS
- 2010

A semi-supervised framework is proposed to model this kind of problem, where each training sample is a bag containing multiple instances associated with a set of candidate labeling vectors; using the labeling vectors provides a principled way not to exclude any information.

NLNL: Negative Learning for Noisy Labels

- Computer Science · 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019

This work uses an indirect learning method called Negative Learning (NL), in which the CNNs are trained using a complementary label, as in "the input image does not belong to this complementary label."
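The negative-learning idea can be sketched in a few lines: instead of maximizing the probability of the given label (which may be noisy), it minimizes the probability of the complementary label. A minimal NumPy sketch, with illustrative names:

```python
import numpy as np

def negative_learning_loss(probs, comp_labels):
    """Negative-learning loss: penalize confidence in the class the
    input is known NOT to belong to, i.e. -log(1 - p_ybar).

    probs       : (n, K) array of softmax probabilities
    comp_labels : (n,) complementary ("not this class") labels
    """
    p_bar = probs[np.arange(len(comp_labels)), comp_labels]
    # Clip to avoid log(0) when the model is (wrongly) fully confident.
    return -np.log(np.clip(1.0 - p_bar, 1e-12, 1.0))
```

The loss is small when the model assigns little probability to the complementary class, which makes it more robust to label noise than the ordinary cross-entropy on a possibly wrong positive label.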

Semi-supervised Learning by Entropy Minimization

- Computer Science · CAP
- 2004

This framework, which motivates minimum-entropy regularization, makes it possible to incorporate unlabeled data into standard supervised learning, and includes other approaches to the semi-supervised problem as particular or limiting cases.
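Minimum-entropy regularization amounts to adding the conditional Shannon entropy of the predictions on unlabeled data to the supervised loss, pushing the classifier toward confident predictions away from the decision boundary. A minimal NumPy sketch (names are illustrative):

```python
import numpy as np

def entropy_regularized_objective(logp_labeled, labels, probs_unlabeled, lam=0.1):
    """Semi-supervised objective: supervised cross-entropy on labeled data
    plus lam times the mean Shannon entropy of the predictive
    distributions on unlabeled data (minimum-entropy regularization).

    logp_labeled    : (n, K) log-probabilities on labeled examples
    labels          : (n,) true labels for the labeled examples
    probs_unlabeled : (m, K) predicted probabilities on unlabeled examples
    lam             : regularization weight
    """
    # Supervised term: negative log-likelihood of the true labels.
    ce = -logp_labeled[np.arange(len(labels)), labels].mean()
    # Unsupervised term: entropy of the predictions (small epsilon avoids log(0)).
    ent = -(probs_unlabeled * np.log(probs_unlabeled + 1e-12)).sum(axis=1).mean()
    return ce + lam * ent
```

Minimizing the entropy term drives `probs_unlabeled` toward one-hot predictions, which is the "cluster assumption" at work.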

A Conditional Multinomial Mixture Model for Superset Label Learning

- Computer Science · NIPS
- 2012

A probabilistic model, the Logistic Stick-Breaking Conditional Multinomial Model (LSB-CMM), is proposed, derived from the logistic stick-breaking process, to solve the superset label learning problem by maximizing the likelihood of the candidate label sets of training instances.

Binary Classification from Positive-Confidence Data

- Computer Science · NeurIPS
- 2018

It is shown that if positive data can be equipped with confidence (positive-confidence), a binary classifier can be successfully learned; this setting is named positive-confidence (Pconf) classification.
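A minimal NumPy sketch of the Pconf empirical risk, assuming each positive example carries the confidence r(x) = p(y = +1 | x) and the class prior is known; function and parameter names are illustrative.

```python
import numpy as np

def pconf_risk(f_x, r, prior, loss):
    """Empirical Pconf risk: the binary classification risk estimated
    from positive examples alone, each equipped with a confidence
    r(x) = p(y = +1 | x).

    R(f) = pi_+ * E_+[ loss(f(x)) + (1 - r)/r * loss(-f(x)) ]

    f_x   : (n,) classifier outputs on positive data
    r     : (n,) positive confidences in (0, 1]
    prior : class prior pi_+ = p(y = +1)
    loss  : margin loss on y * f(x), e.g. the logistic loss
    """
    return prior * np.mean(loss(f_x) + (1.0 - r) / r * loss(-f_x))

def logistic(z):
    # Logistic (log-sigmoid) margin loss.
    return np.log1p(np.exp(-z))
```

The `(1 - r)/r` weight reconstructs the contribution of the unseen negative class from the confidences, which is why no negative data are needed.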

Proper losses for learning from partial labels

- Computer Science, Mathematics · NIPS
- 2012

The concept of proper loss is generalized to this scenario, a necessary and sufficient condition for a loss function to be proper is established, and a direct procedure is shown to construct a proper loss for partial labels from a conventional proper loss.