Corpus ID: 236318238

MCDAL: Maximum Classifier Discrepancy for Active Learning

@article{Cho2021MCDALMC,
  title={MCDAL: Maximum Classifier Discrepancy for Active Learning},
  author={Jae Won Cho and Dong-Jin Kim and Yunjae Jung and In-So Kweon},
  journal={arXiv preprint arXiv:2107.11049},
  year={2021}
}
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GANs) for sample acquisition; however, GANs are known to suffer from instability and sensitivity to hyper-parameters. In contrast to these methods, we propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL), which exploits the prediction discrepancies between multiple classifiers. In particular, we utilize two auxiliary…
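The acquisition idea in the abstract, scoring unlabeled samples by the disagreement between classifier heads, might be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the two-head setup, the L1 discrepancy measure, and all function names here are assumptions made for the example.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def discrepancy_scores(logits_a, logits_b):
    """L1 distance between the predictions of two classifier heads.

    Each argument has shape (n_samples, n_classes); a large score marks
    a sample on which the heads disagree."""
    return np.abs(softmax(logits_a) - softmax(logits_b)).sum(axis=1)

def select_for_labeling(logits_a, logits_b, budget):
    """Indices of the `budget` unlabeled samples with the largest discrepancy."""
    scores = discrepancy_scores(logits_a, logits_b)
    return np.argsort(-scores)[:budget]
```

In a pool-based loop, the selected indices would be sent to an annotator and moved from the unlabeled pool to the training set before the next round.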


LabOR: Labeling Only if Required for Domain Adaptive Semantic Segmentation
A "Labeling Only if Required" strategy, LabOR, is proposed: a human-in-the-loop approach that adaptively gives scarce labels to the points a UDA model is uncertain about, achieving near-supervised performance.

References

Showing 1-10 of 50 references
Dual Adversarial Network for Deep Active Learning
This paper investigates the overlap problem in recent uncertainty-based approaches and proposes a dual adversarial network, DAAL, which learns to select the most uncertain and most representative data points in a single stage.
State-Relabeling Adversarial Active Learning
This paper proposes a state-relabeling adversarial active learning model (SRAAL) that leverages both the annotations and the labeled/unlabeled state information to derive the most informative unlabeled samples.
Adversarial Sampling for Active Learning
  • Christoph Mayer, R. Timofte
  • Computer Science, Mathematics
  • 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
  • 2020
This paper proposes ASAL, a new GAN-based active learning method that generates high-entropy samples; it outperforms similar methods and clearly exceeds the established random-sampling baseline.
Agreement-Discrepancy-Selection: Active Learning with Progressive Distribution Alignment
This paper proposes an agreement-discrepancy-selection approach that unifies distribution alignment with sample selection by introducing adversarial classifiers into a convolutional neural network (CNN).
The Power of Ensembles for Active Learning in Image Classification
Ensembles are found to perform better and to yield better-calibrated predictive uncertainties, which form the basis of many active learning algorithms, whereas Monte Carlo Dropout uncertainties perform worse.
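The ensemble-uncertainty idea summarized above can be sketched as scoring each sample by the entropy of the ensemble's averaged prediction. This is a hedged illustration: the entropy-of-the-mean scoring, the array shapes, and the function name are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the averaged prediction across ensemble members.

    member_probs has shape (n_members, n_samples, n_classes); higher
    entropy marks samples the ensemble is collectively uncertain about.
    """
    mean_probs = member_probs.mean(axis=0)  # average the members' predictions
    # Small epsilon guards against log(0) for classes with zero mass.
    return -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
```

Samples with the highest entropy would then be sent for labeling; MC Dropout variants replace the explicit ensemble with stochastic forward passes of one network.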
Adversarial Discriminative Domain Adaptation
ADDA is shown to be more effective yet considerably simpler than competing domain-adversarial methods; the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as on a difficult cross-modality object classification task.
Cost-Effective Active Learning for Deep Image Classification
This paper proposes a novel active learning (AL) framework that builds a competitive classifier with an optimal feature representation from a limited number of labeled training instances, learned incrementally, and incorporates deep convolutional neural networks into AL.
Variational Adversarial Active Learning
A pool-based semi-supervised active learning algorithm that learns its sampling mechanism implicitly, in an adversarial manner; it learns an effective low-dimensional latent space in large-scale settings and provides a computationally efficient sampling method.
Contextual Diversity for Active Learning
The notion of contextual diversity, which captures the confusion associated with spatially co-occurring classes, is introduced, and state-of-the-art active learning results are established on benchmark datasets for semantic segmentation, object detection, and image classification.
Learning Loss for Active Learning
  • Donggeun Yoo, I. Kweon
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
A novel active learning method that is simple yet task-agnostic and works efficiently with deep networks: a small parametric module, the "loss prediction module," is attached to a target network and trained to predict the target losses of unlabeled inputs.