A brief introduction to weakly supervised learning

@article{Zhou2018ABI,
  title={A brief introduction to weakly supervised learning},
  author={Z. Zhou},
  journal={National Science Review},
  year={2018},
  volume={5},
  pages={44--53}
}
Supervised learning techniques construct predictive models by learning from a large number of training examples, where each training example has a label indicating its ground-truth output. Though current techniques have achieved great success, it is noteworthy that in many tasks it is difficult to get strong supervision information like fully ground-truth labels due to the high cost of the data-labeling process. Thus, it is desirable for machine-learning techniques to work with weak supervision…
Safe semi-supervised learning: a brief introduction
TLDR
This article reviews some research progress of safe semi-supervised learning, focusing on three types of safeness issue: data quality, where the training data is risky or of low quality; model uncertainty, where the learning algorithm fails to handle the uncertainty during training; and measure diversity, where the safe performance could be adapted to diverse measures.
Towards Safe Weakly Supervised Learning
TLDR
A generic ensemble learning scheme to derive a safe prediction by integrating multiple weakly supervised learners is presented, which optimizes the worst-case performance gain and leads to a maximin optimization.
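The maximin idea described above can be illustrated with a toy sketch (a hypothetical simplification, not the paper's actual algorithm; `safe_combination` and `mse` are illustrative names): given the predictions of two weak learners, search for the convex combination that maximizes the worst-case gain over a baseline prediction, where the unknown ground truth is assumed to lie among the learners' outputs.

```python
def mse(a, b):
    """Mean squared error between two prediction vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def safe_combination(weak_preds, baseline, steps=100):
    """Grid-search a convex combination of two weak learners' predictions
    that maximizes the minimum gain over the baseline, treating each
    learner's prediction in turn as the candidate ground truth."""
    p1, p2 = weak_preds
    best_w, best_gain = 0.0, float("-inf")
    for k in range(steps + 1):
        w = k / steps
        combo = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
        # worst-case gain: smallest improvement over the baseline
        # across the candidate ground truths
        gain = min(mse(baseline, truth) - mse(combo, truth)
                   for truth in (p1, p2))
        if gain > best_gain:
            best_w, best_gain = w, gain
    return best_w, best_gain
```

With two learners that disagree completely, e.g. predictions `[0, 1]` and `[1, 0]` against a baseline of `[0, 0]`, the worst-case-optimal weight is 0.5, i.e. the average of the learners — consistent with the intuition that hedging across weak learners protects the worst case.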
Learning from Incomplete and Inaccurate Supervision
TLDR
This paper proposes a novel method which is able to effectively alleviate the negative influence of one-sided label noise with the help of a vast amount of unlabeled data.
Training image classifiers using Semi-Weak Label Data
TLDR
This paper proposes a novel semi-weak label learning paradigm that outperforms baseline models for both the MIL-based weakly supervised setting and the learning-from-proportions setting, and gives results comparable to the fully supervised model.
Learning from Indirect Observations
TLDR
A probabilistic framework, learning from indirect observations, is presented for learning from a wide range of weak supervision in real-world problems, e.g., noisy labels, complementary labels and coarse-grained labels.
Active WeaSuL: Improving Weak Supervision with Active Learning
TLDR
Two contributions are made: a modification of the weak supervision loss function, such that the expert-labelled data inform and improve the combination of weak labels, and the maxKL divergence sampling strategy, which determines for which data points expert labelling is most beneficial.
WeakAL: Combining Active Learning and Weak Supervision
TLDR
This paper proposes WeakAL, which incorporates Weak Supervision (WS) techniques directly into the AL cycle, and investigates different WS strategies as well as different parameter combinations for a wide range of real-world datasets.
A General Formulation for Safely Exploiting Weakly Supervised Data
TLDR
A scheme that builds the final prediction by integrating several weakly supervised learners is presented, which brings two advantages: for the commonly used convex loss functions in both regression and classification tasks, safeness guarantees exist under a mild condition, and the formulation can be solved globally and efficiently by a simple convex quadratic or linear program.
Weakly Supervised Learning Creates a Fusion of Modeling Cultures
TLDR
The key concepts in weakly supervised learning are summarized, some recent developments in the field are discussed, and integrating the data modeling culture into such a framework is identified as a promising direction.
Semi-Supervised Partial Label Learning via Confidence-Rated Margin Maximization
TLDR
This paper investigates the problem of semi-supervised partial label learning, where unlabeled data is utilized to facilitate model induction along with partial label training examples, and shows that the predictive model and labeling confidence can be solved via alternating optimization, which admits QP solutions in each alternating step.

References

Showing 1–10 of 106 references
Semi-supervised learning by disagreement
  • Z. Zhou, M. Li
  • Computer Science
  • 2008 IEEE International Conference on Granular Computing
  • 2008
TLDR
An introduction to research advances in disagreement-based semi-supervised learning is provided, where multiple learners are trained for the task and the disagreements among the learners are exploited during the semi-supervised learning process.
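The disagreement-based mechanism described above can be sketched as a toy co-training loop (a hypothetical 1-NN simplification; real disagreement-based methods use stronger learners and proper confidence estimates): two learners, each seeing a different feature view, take turns pseudo-labelling the unlabeled example they are most confident about, and those pseudo-labels augment the shared training set.

```python
def nn_label(x, labeled):
    """Nearest labeled neighbour of x: returns (distance, label)."""
    return min((abs(x - lx), ly) for lx, ly in labeled)

def co_train(view_a, view_b, labels, rounds=10):
    """view_a / view_b: per-example 1-D features for two views;
    labels: known class, or None for unlabeled examples."""
    labels = list(labels)
    for _ in range(rounds):
        for view in (view_a, view_b):
            unl = [i for i, y in enumerate(labels) if y is None]
            if not unl:
                return labels
            labeled = [(view[i], y) for i, y in enumerate(labels)
                       if y is not None]
            # this view pseudo-labels the unlabeled example it is most
            # confident about (smallest nearest-neighbour distance)
            best = min(unl, key=lambda i: nn_label(view[i], labeled)[0])
            labels[best] = nn_label(view[best], labeled)[1]
    return labels
```

For example, `co_train([0.0, 0.1, 1.0, 0.9], [0.0, 0.05, 1.0, 0.95], [0, None, 1, None])` fills in the two missing labels from whichever view is most confident at each step, yielding `[0, 0, 1, 1]`.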
Convex and scalable weakly labeled SVMs
TLDR
This paper focuses on SVMs and proposes the WELLSVM via a novel label generation strategy, which leads to a convex relaxation of the original MIP that is at least as tight as existing convex semi-definite programming (SDP) relaxations.
Towards Making Unlabeled Data Never Hurt
  • Yu-Feng Li, Z. Zhou
  • Computer Science, Medicine
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2015
TLDR
It is shown that S4VMs are provably safe and that the performance improvement using unlabeled data can be maximized; in contrast to S3VMs, which hurt performance significantly in many cases, S4VMs rarely perform worse than inductive SVMs.
Weak supervision and other non-standard classification problems: A taxonomy
TLDR
A taxonomy of weakly supervised classification problems that allows one to understand similarities and differences among the classification problems already presented in the literature, as well as to discover unexplored frameworks that might be seen as further challenges and research opportunities.
Learning From Crowds
TLDR
A probabilistic approach for supervised learning with multiple annotators providing (possibly noisy) labels but no absolute gold standard; experimental results indicate that the proposed method is superior to the commonly used majority voting baseline.
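The majority-voting baseline mentioned above is straightforward to sketch (a minimal illustration; `majority_vote` is an illustrative helper, not from the paper): each example's label is taken to be the most common label among its annotators.

```python
from collections import Counter

def majority_vote(annotations):
    """annotations: one list of annotator labels per example.
    Returns the most common label for each example (ties broken by
    first occurrence, per Counter.most_common)."""
    return [Counter(labels).most_common(1)[0][0] for labels in annotations]
```

For instance, `majority_vote([["cat", "cat", "dog"], ["dog", "dog", "cat"]])` returns `["cat", "dog"]`. The probabilistic approach in the paper improves on this by also estimating each annotator's reliability.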
On the relation between multi-instance learning and semi-supervised learning
TLDR
The MissSVM algorithm is proposed, which addresses multi-instance learning using a special semi-supervised support vector machine and is competitive with state-of-the-art multi-instance learning algorithms.
MISSL: multiple-instance semi-supervised learning
TLDR
This work presents MISSL (Multiple-Instance Semi-Supervised Learning), which transforms any MI problem into an input for a graph-based single-instance semi-supervised learning method that encodes the MI aspects of the problem while working simultaneously at both the bag and point levels.
Learning with Local and Global Consistency
TLDR
A principled approach to semi-supervised learning is to design a classifying function which is sufficiently smooth with respect to the intrinsic structure collectively revealed by known labeled and unlabeled points.
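The smoothness idea above can be sketched as a toy label-propagation loop (a hypothetical simplification: it assumes a pre-computed row-normalized similarity matrix and omits the symmetric normalization the original method uses): labels spread over a similarity graph by iterating F ← αSF + (1 − α)Y, so each point's label stays consistent with its neighbours (local) while labeled points keep anchoring the solution (global).

```python
def propagate(S, Y, alpha=0.5, iters=50):
    """S: row-normalized similarity matrix (list of lists);
    Y: initial label matrix, one row per point and one column per
    class, all zeros for unlabeled points. Returns the predicted
    class index for each point."""
    n, c = len(Y), len(Y[0])
    F = [row[:] for row in Y]
    for _ in range(iters):
        # synchronous update: F <- alpha * S @ F + (1 - alpha) * Y
        F = [[alpha * sum(S[i][k] * F[k][j] for k in range(n))
              + (1 - alpha) * Y[i][j]
              for j in range(c)]
             for i in range(n)]
    return [max(range(c), key=lambda j: F[i][j]) for i in range(n)]
```

On a four-point chain with the endpoints labeled as classes 0 and 1, the two unlabeled interior points inherit the label of the nearer endpoint, giving `[0, 0, 1, 1]`.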
When semi-supervised learning meets ensemble learning
TLDR
This paper advocates that semi-supervised learning and ensemble learning are indeed beneficial to each other, and that stronger learning machines can be generated by leveraging unlabeled data and classifier combination.
Active Learning Literature Survey
TLDR
This report provides a general introduction to active learning and a survey of the literature, including a discussion of the scenarios in which queries can be formulated, and an overview of the query strategy frameworks proposed in the literature to date.