Robust Visual Knowledge Transfer via EDA

Abstract

We address the problem of visual knowledge adaptation by leveraging labeled patterns from a source domain and a very limited number of labeled instances in the target domain to learn a robust classifier for visual categorization. We introduce a new semi-supervised cross-domain network learning framework, referred to as Extreme Domain Adaptation (EDA), that simultaneously learns a category transformation and an extreme classifier by minimizing the ℓ2,1-norm of the network output weights together with the learning error, where the output weights can be determined analytically. The unlabeled target data, as useful knowledge, are also exploited through a fidelity term that minimizes the matching error between the extreme classifier and a base classifier, which guarantees stability during cross-domain learning; many existing classifiers can be readily incorporated as base classifiers. Additionally, manifold regularization with a graph Laplacian is incorporated into EDA, making it well suited to semi-supervised learning. Building on EDA, we further propose an extended model learned from multiple views. Experiments on three visual data sets for video event recognition and object recognition, respectively, demonstrate that EDA outperforms existing cross-domain learning methods.
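To make the "analytically determined output weights" idea concrete, the following is a minimal sketch, not the authors' EDA: a single-hidden-layer extreme learning machine whose output weights are solved in closed form, with an optional graph-Laplacian term over unlabeled data standing in for the manifold regularizer. It uses a plain ridge penalty rather than the ℓ2,1-norm, and the function name, hyperparameters (lam, gamma, n_hidden), and Gaussian-affinity graph are all illustrative assumptions.

```python
import numpy as np

def elm_closed_form(X, T, X_unlabeled=None, n_hidden=200, lam=1e-2, gamma=1e-2, seed=0):
    """Illustrative sketch: ELM with closed-form output weights and an optional
    graph-Laplacian regularizer over unlabeled data (not the paper's exact model)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random, fixed input weights and biases (the defining trait of an ELM).
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    hidden = lambda Z: np.tanh(Z @ W + b)

    H = hidden(X)                        # hidden activations of labeled data
    reg = lam * np.eye(n_hidden)         # ridge penalty on the output weights
    if X_unlabeled is not None:
        Xu = np.vstack([X, X_unlabeled])
        Hu = hidden(Xu)
        # Simple Gaussian-affinity graph Laplacian (an assumed construction).
        D2 = ((Xu[:, None, :] - Xu[None, :, :]) ** 2).sum(-1)
        A = np.exp(-D2 / (2 * np.median(D2) + 1e-12))
        L = np.diag(A.sum(1)) - A
        reg = reg + gamma * Hu.T @ L @ Hu
    # Closed-form output weights: beta = (H^T H + reg)^{-1} H^T T
    beta = np.linalg.solve(H.T @ H + reg, H.T @ T)
    return W, b, beta

if __name__ == "__main__":
    # Usage sketch: labeled rows with one-hot targets plus unlabeled target rows.
    X = np.random.rand(20, 5)
    T = np.eye(4)[np.random.randint(0, 4, 20)]
    Xu = np.random.rand(30, 5)
    W, b, beta = elm_closed_form(X, T, Xu)
    scores = np.tanh(np.random.rand(3, 5) @ W + b) @ beta
    print(scores.argmax(1))
```

Because the input weights stay random and fixed, the whole fit reduces to one regularized linear solve, which is what makes the closed-form determination of the output weights possible.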

Cite this paper

@article{Zhang2015RobustVK, title={Robust Visual Knowledge Transfer via EDA}, author={Lei Zhang and David Zhang}, journal={CoRR}, year={2015}, volume={abs/1505.04382} }