RIKEN Center for Advanced Intelligence Project
Co-teaching: Robust training of deep neural networks with extremely noisy labels
Empirical results on noisy versions of MNIST, CIFAR-10 and CIFAR-100 demonstrate that Co-teaching trains deep models that are far more robust than those of state-of-the-art methods.
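The core of Co-teaching is that two networks each pick the small-loss samples in a mini-batch (likely clean under the memorization effect) and hand them to the *peer* network for its update. A minimal dependency-free sketch of that selection-and-exchange step (function names and the plain-list interface are illustrative, not the authors' implementation):

```python
def select_small_loss(losses, keep_ratio):
    """Indices of the keep_ratio fraction of samples with the smallest loss,
    treated as likely-clean under the memorization effect."""
    k = max(1, int(len(losses) * keep_ratio))
    return sorted(range(len(losses)), key=lambda i: losses[i])[:k]

def coteaching_step(losses_a, losses_b, keep_ratio):
    """One Co-teaching exchange: each network selects its small-loss subset,
    and each network is then updated on the subset chosen by its peer."""
    idx_for_b = select_small_loss(losses_a, keep_ratio)  # A teaches B
    idx_for_a = select_small_loss(losses_b, keep_ratio)  # B teaches A
    return idx_for_a, idx_for_b
```

In the paper `keep_ratio` is scheduled to shrink during training as the networks start memorizing noisy labels.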
Positive-Unlabeled Learning with Non-Negative Risk Estimator
This paper proposes a non-negative risk estimator for PU learning: when minimized, it is more robust against overfitting, which makes it possible to use very flexible models (such as deep neural networks) given limited positive (P) data.
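The estimator rewrites the negative-class risk using positive and unlabeled data and then clamps it at zero, since a negative empirical value can only come from overfitting. A toy sketch of that computation on plain lists of per-sample losses (variable names are illustrative):

```python
def nnpu_risk(pos_losses, pos_losses_as_neg, unl_losses_as_neg, prior):
    """Non-negative PU risk:
        R = prior * R_p^+ + max(0, R_u^- - prior * R_p^-),
    where R_u^- - prior * R_p^- is the usual unbiased PU estimate of the
    negative-class risk, clamped at zero to prevent it going negative.

    pos_losses:         losses of positives scored as positive, l(f(x), +1)
    pos_losses_as_neg:  losses of positives scored as negative, l(f(x), -1)
    unl_losses_as_neg:  losses of unlabeled scored as negative, l(f(x), -1)
    prior:              class prior pi_p = p(y = +1)
    """
    mean = lambda xs: sum(xs) / len(xs)
    r_p_pos = mean(pos_losses)
    r_neg = mean(unl_losses_as_neg) - prior * mean(pos_losses_as_neg)
    return prior * r_p_pos + max(0.0, r_neg)
```

Without the `max(0.0, ...)`, a flexible model can drive the estimated negative risk below zero and overfit; the clamp is exactly what "non-negative" refers to.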
How does Disagreement Help Generalization against Label Corruption?
- Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, I. Tsang, Masashi Sugiyama
- Computer Science, ICML
- 14 January 2019
A robust learning paradigm called Co-teaching+ is proposed, which bridges the "Update by Disagreement" strategy with the original Co-teaching and trains models that are far more robust than those of many state-of-the-art methods.
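Co-teaching+ keeps the two networks from converging to the same mistakes by restricting the small-loss exchange to samples on which the two networks currently *disagree*. A minimal sketch of that selection rule, built on the same plain-list interface as above (not the authors' implementation):

```python
def coteaching_plus_select(preds_a, preds_b, losses_a, losses_b, keep_ratio):
    """Co-teaching+ style selection: first keep only samples where the two
    networks' predictions disagree, then exchange small-loss subsets
    within that disagreement pool."""
    disagree = [i for i in range(len(preds_a)) if preds_a[i] != preds_b[i]]
    if not disagree:
        return [], []
    k = max(1, int(len(disagree) * keep_ratio))
    idx_for_b = sorted(disagree, key=lambda i: losses_a[i])[:k]  # A teaches B
    idx_for_a = sorted(disagree, key=lambda i: losses_b[i])[:k]  # B teaches A
    return idx_for_a, idx_for_b
```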
Convex Formulation for Learning from Positive and Unlabeled Data
This paper proposes a convex formulation for PU classification that can still cancel the bias, and proves that the estimators converge to the optimal solutions at the optimal parametric rate.
Analysis of Learning from Positive and Unlabeled Data
This paper first shows that this problem can be solved by cost-sensitive learning between positive and unlabeled data, and then shows that convex surrogate loss functions such as the hinge loss may lead to a wrong classification boundary due to an intrinsic bias, which can be avoided by using non-convex loss functions such as the ramp loss.
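The bias the paper analyzes disappears for losses satisfying the symmetry l(z) + l(-z) = 1, which the non-convex ramp loss obeys but the convex hinge loss does not. A tiny illustration of that property check (the specific scaling of the ramp loss below is one common convention):

```python
def hinge(z):
    """Hinge loss: convex, but l(z) + l(-z) != 1 in general."""
    return max(0.0, 1.0 - z)

def ramp(z):
    """Ramp loss: non-convex, satisfies the symmetry l(z) + l(-z) = 1,
    which cancels the intrinsic bias in cost-sensitive PU learning."""
    return min(1.0, max(0.0, (1.0 - z) / 2.0))
```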
Does Distributionally Robust Supervised Learning Give Robust Classifiers?
This paper proves that DRSL simply ends up giving a classifier that exactly fits the given training distribution, which is too pessimistic, and proposes a simple DRSL that overcomes this pessimism, empirically demonstrating its effectiveness.
Attacks Which Do Not Kill Training Make Adversarial Learning Stronger
A novel approach of friendly adversarial training (FAT) is proposed: rather than employing the most adversarial data that maximize the loss, it searches for the least adversarial data that minimize the loss, among the adversarial data that are confidently misclassified.
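Concretely, FAT replaces a full-strength inner attack with an early-stopped one: keep taking attack steps, but stop a few steps after the example first becomes misclassified, yielding a "least adversarial" example. A toy, model-agnostic sketch of that early-stopping loop (the `step_fn`/`is_misclassified` callables and the `tau` slack parameter are illustrative stand-ins for a PGD step and a model check):

```python
def friendly_attack(x, step_fn, is_misclassified, max_steps, tau=0):
    """Early-stopped adversarial search in the spirit of FAT: apply attack
    steps, but return tau steps after the input first becomes misclassified,
    instead of running the attack to its full strength."""
    slack = None
    for _ in range(max_steps):
        if is_misclassified(x):
            if slack is None:
                slack = tau          # start counting extra steps
            if slack == 0:
                return x             # least adversarial misclassified point
            slack -= 1
        x = step_fn(x)
    return x                         # budget exhausted; return what we have
```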
Learning from Complementary Labels
This paper shows that an unbiased estimator to the classification risk can be obtained only from complementarily labeled data, if a loss function satisfies a particular symmetric condition, and derives estimation error bounds and proves that the optimal parametric convergence rate is achieved.
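A complementary label specifies a class an instance does *not* belong to. Under the uniform assumption commonly made in this setting (each wrong class is equally likely to be given as the complementary label, which is my reading of the setup rather than a quote from the paper), the complementary-label distribution is a simple linear transform of the true posterior, which is what makes risk rewriting possible:

```python
def complementary_posterior(p_true):
    """Under the uniform assumption, the complementary label ybar is drawn
    uniformly from the K-1 classes other than the true one, so
        p(ybar | x) = (1 / (K - 1)) * sum_{y != ybar} p(y | x)."""
    K = len(p_true)
    return [sum(p_true[y] for y in range(K) if y != yb) / (K - 1)
            for yb in range(K)]
```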
Masking: A New Perspective of Noisy Supervision
A human-assisted approach called Masking is proposed that conveys human cognition of invalid class transitions, naturally constrains the structure of the noise transition matrix, and can improve the robustness of classifiers significantly.
Are Anchor Points Really Indispensable in Label-Noise Learning?
Empirical results on benchmark-simulated and real-world label-noise datasets demonstrate that, without using exact anchor points, the proposed method is superior to state-of-the-art label-noise learning methods.
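Both Masking and anchor-point-based methods revolve around the noise transition matrix T, where T[i][j] = p(noisy label j | true label i): once T is known, the noisy-label posterior is a linear mix of the clean one, and the loss can be corrected accordingly. A minimal sketch of that forward-correction view (dependency-free and illustrative, not any one paper's implementation):

```python
def noisy_posterior(p_clean, T):
    """Forward view of class-conditional label noise: given a transition
    matrix T with T[i][j] = p(ybar = j | y = i), the noisy-label posterior is
        p(ybar = j | x) = sum_i p(y = i | x) * T[i][j]."""
    K = len(p_clean)
    return [sum(p_clean[i] * T[i][j] for i in range(K)) for j in range(K)]
```

Anchor points are instances with p(y = i | x) close to 1; evaluating the noisy posterior at such a point reads off row i of T directly, which is why methods that avoid needing exact anchor points (as above) are attractive.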