Adversarial Knowledge Transfer from Unlabeled Data

Akash Gupta, R. Panda, S. Paul, Jianming Zhang, A. Roy-Chowdhury. In Proceedings of the 28th ACM International Conference on Multimedia.
While machine learning approaches to visual recognition offer great promise, most of the existing methods rely heavily on the availability of large quantities of labeled training data. However, in the vast majority of real-world settings, manually collecting such large labeled datasets is infeasible due to the cost of labeling data or the paucity of data in a given domain. In this paper, we present a novel Adversarial Knowledge Transfer (AKT) framework for transferring knowledge from internet…


Domain-Adversarial Training of Neural Networks
A new representation-learning approach for domain adaptation, in which data at training and test time come from similar but different distributions; it can be implemented in almost any feed-forward model by augmenting it with a few standard layers and a new gradient reversal layer.
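The gradient reversal layer can be sketched without any deep-learning framework: it acts as the identity in the forward pass and flips (and scales) the gradient in the backward pass, so the feature extractor is trained to confuse the domain classifier. A minimal sketch, where the `lam` scaling factor and the list-based gradient representation are illustrative assumptions:

```python
class GradientReversal:
    """Identity in the forward pass; multiplies gradients by -lam in the backward pass."""

    def __init__(self, lam=1.0):
        # lam controls how strongly the adversarial domain signal is reversed
        self.lam = lam

    def forward(self, x):
        # Forward pass: pass features through unchanged
        return x

    def backward(self, grad):
        # Backward pass: reverse and scale each incoming gradient component
        return [-self.lam * g for g in grad]
```

In an actual framework this would be registered as a custom autograd function, so that gradients flowing from the domain classifier into the shared feature extractor arrive negated.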
Open Set Domain Adaptation by Backpropagation
This paper proposes a method for the open set domain adaptation scenario that utilizes adversarial training and allows the extraction of features that separate unknown target samples from known target samples.
Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
A new regularization method based on virtual adversarial loss, a measure of the local smoothness of the conditional label distribution given the input, which achieves state-of-the-art performance on semi-supervised learning tasks on SVHN and CIFAR-10.
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning
An unsupervised loss function is proposed that takes advantage of the stochastic nature of these methods (e.g., dropout and random data augmentation) and minimizes the difference between the predictions of multiple passes of a training sample through the network.
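The stability loss described above can be illustrated as the mean squared difference between the predictions from two stochastic forward passes of the same sample; the function name and the plain-list interface are illustrative assumptions:

```python
def consistency_loss(preds_a, preds_b):
    """Mean squared difference between predictions from two stochastic passes
    (e.g., different dropout masks or random augmentations) of one sample."""
    assert len(preds_a) == len(preds_b)
    # Average the squared per-class differences; no labels are required
    return sum((a - b) ** 2 for a, b in zip(preds_a, preds_b)) / len(preds_a)
```

Because no labels are needed, this term can be computed on unlabeled data and added to the supervised loss on the labeled subset.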
Robust and Discriminative Self-Taught Learning
This work proposes a novel robust and discriminative self-taught learning approach that utilizes arbitrary unlabeled data without the above restrictions, derives an efficient iterative algorithm to solve the optimization problem, and rigorously proves its convergence.
A New Benchmark for Evaluation of Cross-Domain Few-Shot Learning
The CD-FSL benchmark is proposed, consisting of images from diverse domains with varying similarity to ImageNet (ranging from crop disease images to satellite and medical images), to serve as a challenging platform that guides future research on cross-domain few-shot learning through its spectrum of diversity and coverage.
Self-taught learning: transfer learning from unlabeled data
An approach to self-taught learning that uses sparse coding to construct higher-level features from the unlabeled data, forming a succinct input representation and significantly improving classification performance.
Unsupervised Domain Adaptation with Residual Transfer Networks
Empirical evidence shows that the new approach to domain adaptation in deep networks, which can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain, outperforms state-of-the-art methods on standard domain adaptation benchmarks.
A Unified Framework for Metric Transfer Learning
A metric transfer learning framework (MTLF) is proposed to encode metric learning into transfer learning, making knowledge transfer across domains more effective, and general solutions are developed for both classification and regression problems on top of MTLF.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
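The adversarial objective can be expressed as a pair of losses; below is a minimal sketch of the standard discriminator loss and the commonly used non-saturating generator loss, with scalar probabilities standing in for batched network outputs (an illustrative simplification):

```python
import math

def discriminator_loss(d_real, d_fake):
    """D maximizes log D(x) + log(1 - D(G(z))); return the negation as a
    loss to minimize. d_real, d_fake are D's probability outputs in (0, 1)."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: G maximizes log D(G(z)), i.e.
    minimizes -log D(G(z)), which gives stronger gradients early in training."""
    return -math.log(d_fake)
```

Training alternates between the two: one or more discriminator updates on real and generated samples, then a generator update that pushes D's output on fakes toward 1.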