DeepAL: Deep Active Learning in Python
@inproceedings{Huang2021DeepALDA,
  title  = {DeepAL: Deep Active Learning in Python},
  author = {Kuan-Hao Huang},
  year   = {2021}
}
We present DeepAL, a Python library that implements several common strategies for active learning, with a particular emphasis on deep active learning. DeepAL provides a simple and unified framework based on PyTorch that allows users to easily load custom datasets, build custom data handlers, and design custom strategies without much modification of code. DeepAL is open-source on GitHub and welcomes any contribution.
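A minimal pool-based loop in PyTorch illustrates the workflow the abstract describes: train on the labeled pool, score the unlabeled pool with a strategy, query, and repeat. This is not DeepAL's actual API; the synthetic data, the model, and the `least_confidence_query` helper below are hypothetical stand-ins.

```python
# Generic pool-based active learning loop (NOT DeepAL's actual API;
# names and data below are hypothetical stand-ins).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic pool: 1,000 points in 2-D, two classes.
X = torch.randn(1000, 2)
y = (X.sum(dim=1) > 0).long()

labeled = torch.zeros(len(X), dtype=torch.bool)
labeled[torch.randperm(len(X))[:20]] = True          # small initial labeled set

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

def train(model, X, y, epochs=50):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

def least_confidence_query(model, X_pool, n):
    # Query the n pool points whose top predicted probability is lowest.
    with torch.no_grad():
        probs = torch.softmax(model(X_pool), dim=1)
    confidence, _ = probs.max(dim=1)
    return confidence.argsort()[:n]

for round_ in range(5):
    train(model, X[labeled], y[labeled])
    pool_idx = (~labeled).nonzero(as_tuple=True)[0]
    picked = pool_idx[least_confidence_query(model, X[pool_idx], n=20)]
    labeled[picked] = True                            # "annotate" the queried points
    with torch.no_grad():
        acc = (model(X).argmax(dim=1) == y).float().mean().item()
    print(f"round {round_}: labeled={labeled.sum().item()}, accuracy={acc:.3f}")
```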
6 Citations
Learn, Unlearn and Relearn: An Online Learning Paradigm for Deep Neural Networks
- Computer Science, ArXiv
- 2023
This work introduces Learn, Unlearn, and Relearn (LURE), an online learning paradigm for DNNs that alternates between an unlearning phase, which selectively forgets undesirable information in the model through weight reinitialization in a data-dependent manner, and a relearning phase, which emphasizes learning on generalizable features.
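As a rough illustration of the alternating phases described above, the toy sketch below reinitializes one layer of a small MLP between training passes; the paper's actual data-dependent reinitialization criterion is not reproduced here.

```python
# Toy learn/unlearn/relearn cycle on a small MLP. Resetting the first layer's
# weights is only a placeholder for the paper's data-dependent criterion.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X[:, 0] > 0).long()

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

def train_phase(steps=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()

def unlearn_phase():
    # Selectively "forget" by reinitializing one layer's weights.
    nn.init.xavier_uniform_(model[0].weight)
    nn.init.zeros_(model[0].bias)

for cycle in range(3):
    train_phase()      # learn / relearn
    unlearn_phase()    # unlearn
train_phase()          # final relearning pass
```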
A Comparative Survey of Deep Active Learning
- Computer Science, ArXiv
- 2022
A DAL toolkit, DeepAL+, is constructed by re-implementing many highly cited DAL-related methods, and it will be released to the public.
Active-Learning-as-a-Service: An Efficient MLOps System for Data-Centric AI
- Computer Science, ArXiv
- 2022
This work presents an efficient MLOps system for AL, named ALaaS (Active-Learning-as-a-Service), which abstracts an AL process into several components and provides rich APIs for advanced users to extend the system to new scenarios.
Active-Learning-as-a-Service: An Automatic and Efficient MLOps System for Data-Centric AI
- Computer Science
- 2022
This work presents an automatic and efficient MLOps system for AL, named ALaaS (Active-Learning-as-a-Service), which can automatically select and run AL strategies for non-expert users across different datasets and budgets.
Disambiguation of Company names via Deep Recurrent Networks
- Computer Science, ArXiv
- 2023
This work proposes a Siamese LSTM network approach to extract an embedding of company name strings in a (relatively) low-dimensional vector space and uses this representation to identify pairs of company names that actually represent the same company (i.e., the same entity).
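A minimal sketch of a Siamese character-level LSTM encoder along these lines, assuming shared weights and cosine similarity between name embeddings; the vocabulary, dimensions, and the `CharEncoder`/`encode` names are illustrative, and a pair or contrastive loss would be needed to train it.

```python
# Sketch of a Siamese character-level LSTM encoder for company-name matching.
# Dimensions and names are illustrative; training with a pair/contrastive loss
# is left out.
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    def __init__(self, vocab_size=128, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, char_ids):
        _, (h, _) = self.lstm(self.emb(char_ids))
        return h[-1]                      # final hidden state as the name embedding

def encode(name, encoder, max_len=40):
    ids = torch.tensor([[min(ord(c), 127) for c in name[:max_len]]])
    return encoder(ids)

encoder = CharEncoder()                   # shared ("Siamese") weights for both names
a = encode("Acme Corp.", encoder)
b = encode("ACME Corporation", encoder)
similarity = nn.functional.cosine_similarity(a, b).item()
print(f"cosine similarity: {similarity:.3f}")
```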
Pareto Optimization for Active Learning under Out-of-Distribution Data Scenarios
- Computer Science, ArXiv
- 2022
A sampling scheme, Monte-Carlo Pareto Optimization for Active Learning (POAL), is proposed, which selects optimal subsets of unlabeled samples with a fixed batch size from the unlabeled data pool.
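A hedged sketch of the Monte-Carlo Pareto idea: sample many fixed-size candidate batches, score each on two objectives, and keep the non-dominated ones. The objectives below (random stand-ins for per-sample uncertainty and an in-distribution score) are placeholders, not the paper's exact criteria.

```python
# Monte-Carlo Pareto selection of a fixed-size query batch under two objectives.
# The per-sample scores here are random placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
uncertainty = rng.random(500)            # per-sample uncertainty scores (to maximize)
ood_score = rng.random(500)              # per-sample in-distribution scores (to maximize)

def dominates(a, b):
    # a dominates b if it is at least as good on both objectives and better on one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

batch_size, n_candidates = 20, 200
candidates = [rng.choice(500, size=batch_size, replace=False) for _ in range(n_candidates)]
scores = [(uncertainty[idx].mean(), ood_score[idx].mean()) for idx in candidates]

# Keep candidate batches that no other candidate dominates (the Pareto front).
pareto = [i for i, s in enumerate(scores)
          if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]
chosen = candidates[pareto[0]]           # pick one front member, e.g. the first
print(f"{len(pareto)} Pareto-optimal batches; first chosen indices: {sorted(chosen)[:5]}")
```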
References
Showing 1-10 of 13 references
libact: Pool-based Active Learning in Python
- Computer Science, ArXiv
- 2017
libact is a Python package that implements several popular active learning strategies and also features the active-learning-by-learning meta-algorithm, which helps users automatically select the best strategy on the fly.
Deep Bayesian Active Learning with Image Data
- Computer Science, ICML
- 2017
This paper develops an active learning framework for high-dimensional data, a task that has so far been extremely challenging with very sparse existing literature, and demonstrates its active learning techniques on image data, obtaining a significant improvement over existing active learning approaches.
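A sketch of one acquisition function in this spirit, assuming Monte-Carlo dropout: keep dropout active at inference, average the predictive distribution over several stochastic passes, and query the pool points with the highest predictive entropy. The model and pool below are toy placeholders.

```python
# Monte-Carlo-dropout predictive entropy as an acquisition score.
# Model and unlabeled pool are toy stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 10))
X_pool = torch.randn(256, 20)            # unlabeled pool (stand-in for image features)

def mc_dropout_entropy(model, X, passes=20):
    model.train()                        # keep dropout stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(X), dim=1) for _ in range(passes)])
    mean_probs = probs.mean(dim=0)
    return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)

entropy = mc_dropout_entropy(model, X_pool)
query = entropy.argsort(descending=True)[:10]   # most uncertain points to label next
print(query.tolist())
```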
Active Learning for Convolutional Neural Networks: A Core-Set Approach
- Computer Science, ICLR
- 2018
This work defines the problem of active learning as core-set selection, i.e., choosing a set of points such that a model learned over the selected subset is competitive for the remaining data points, and presents a theoretical result characterizing the performance of any selected subset using the geometry of the data points.
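The greedy k-center heuristic commonly used for core-set selection can be sketched as follows: repeatedly pick the pool point farthest in feature space from the current labeled set. The random features below stand in for learned embeddings.

```python
# Greedy k-center selection: pick the point farthest from the current centers,
# then update nearest-center distances. Features are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))      # embeddings of all points
labeled = list(range(10))                    # indices already labeled

def k_center_greedy(features, labeled, budget):
    selected = []
    # Distance from every point to its nearest labeled point.
    dist = np.linalg.norm(features - features[labeled][:, None], axis=2).min(axis=0)
    for _ in range(budget):
        idx = int(dist.argmax())             # farthest point = new center
        selected.append(idx)
        new_dist = np.linalg.norm(features - features[idx], axis=1)
        dist = np.minimum(dist, new_dist)    # update nearest-center distances
    return selected

print(k_center_greedy(features, labeled, budget=20))
```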
Adversarial Active Learning for Deep Networks: a Margin Based Approach
- Computer Science, ArXiv
- 2018
It is demonstrated empirically that adversarial active queries yield faster convergence of CNNs trained on the MNIST, Shoe-Bag, and Quick-Draw datasets.
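A hedged sketch of margin-based adversarial querying: score each unlabeled point by an estimate of the smallest perturbation that would flip its prediction and query the points closest to the decision boundary. The first-order proxy below (logit margin divided by input-gradient norm) stands in for a full adversarial attack.

```python
# First-order estimate of distance to the decision boundary as a query score.
# Model and pool are toy placeholders; a full attack would refine this estimate.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
X_pool = torch.randn(200, 2)

def boundary_distance(model, x):
    x = x.clone().requires_grad_(True)
    logits = model(x)
    top2 = logits.topk(2, dim=1).values
    margin = (top2[:, 0] - top2[:, 1]).sum()
    grad = torch.autograd.grad(margin, x)[0]
    # Distance to the boundary is roughly margin / ||grad|| per sample.
    per_sample_margin = (top2[:, 0] - top2[:, 1]).detach()
    return per_sample_margin / grad.norm(dim=1).clamp_min(1e-12)

query = boundary_distance(model, X_pool).argsort()[:10]   # closest to the boundary
print(query.tolist())
```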
JCLAL: A Java Framework for Active Learning
- Computer Science, J. Mach. Learn. Res.
- 2016
JCLAL is a Java class library for active learning whose architecture follows strong principles of object-oriented design, allowing developers to adapt, modify, and extend the framework according to their needs.
DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
- Computer Science, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
The DeepFool algorithm is proposed to efficiently compute perturbations that fool deep networks and thus reliably quantify the robustness of these classifiers; it outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.
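A simplified single-sample version of the DeepFool step can be sketched as follows: linearize the decision boundaries around the current point and take the smallest step that crosses the nearest one. The model, input, and overshoot value below are illustrative.

```python
# Simplified single-sample DeepFool: iteratively take the minimal linearized
# step toward the nearest class boundary until the prediction flips.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 4))
x = torch.randn(1, 10)

def deepfool(model, x, max_iter=50, overshoot=0.02):
    x_adv = x.clone()
    orig_class = model(x).argmax(dim=1).item()
    for _ in range(max_iter):
        x_adv = x_adv.detach().requires_grad_(True)
        logits = model(x_adv)[0]
        if logits.argmax().item() != orig_class:
            break
        grads = torch.stack([torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0][0]
                             for k in range(len(logits))])
        # Distance to each other class's (linearized) boundary.
        best_pert, best_dist = None, float("inf")
        for k in range(len(logits)):
            if k == orig_class:
                continue
            w = grads[k] - grads[orig_class]
            f = (logits[k] - logits[orig_class]).item()
            dist = abs(f) / w.norm().clamp_min(1e-12)
            if dist < best_dist:
                best_dist, best_pert = dist, (abs(f) / w.norm().pow(2).clamp_min(1e-12)) * w
        x_adv = x_adv + (1 + overshoot) * best_pert
    return x_adv.detach()

x_adv = deepfool(model, x)
print("perturbation norm:", (x_adv - x).norm().item())
```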
Active Learning Literature Survey
- Computer Science
- 2009
This report provides a general introduction to active learning and a survey of the literature, including a discussion of the scenarios in which queries can be formulated, and an overview of the query strategy frameworks proposed in the literature to date.
A sequential algorithm for training text classifiers
- Computer Science, SIGIR '94
- 1994
An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task; it reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.
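Uncertainty sampling in this sequential spirit can be sketched with a probabilistic text classifier: repeatedly train on the labeled documents and query those whose predicted probability is closest to 0.5. The documents, labels, and batch size below are illustrative.

```python
# Sequential uncertainty sampling for a toy two-class text categorization task.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["stocks rally on earnings", "team wins championship game",
        "markets slide amid fears", "star player traded before season"] * 50
labels = np.array([0, 1, 0, 1] * 50)          # 0 = finance, 1 = sports

X = TfidfVectorizer().fit_transform(docs)
labeled = np.zeros(len(docs), dtype=bool)
labeled[:4] = True                             # tiny seed set

for round_ in range(5):
    clf = LogisticRegression().fit(X[labeled], labels[labeled])
    pool = np.where(~labeled)[0]
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[np.argsort(np.abs(probs - 0.5))[:8]]   # most uncertain documents
    labeled[query] = True                      # oracle supplies labels for these
    print(f"round {round_}: labeled={labeled.sum()}, "
          f"accuracy={clf.score(X, labels):.3f}")
```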
Adversarial examples in the physical world
- Computer Science, ICLR
- 2017
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples.
Support-Vector Networks
- Computer Science, Machine Learning
- 2004
High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated, and the performance of the support-vector network is compared to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
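A short sketch of a support-vector classifier with a polynomial kernel, using scikit-learn's bundled digits data as a stand-in for an OCR benchmark; the kernel degree and C value are illustrative choices.

```python
# Support-vector classifier with a polynomial kernel on a small digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="poly", degree=3, C=1.0)     # polynomial kernel of degree 3
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```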