A Survey on Transfer Learning
@article{Pan2010ASO,
  title   = {A Survey on Transfer Learning},
  author  = {Sinno Jialin Pan and Qiang Yang},
  journal = {IEEE Transactions on Knowledge and Data Engineering},
  year    = {2010},
  volume  = {22},
  pages   = {1345-1359}
}
A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution…
15,614 Citations
Knowledge Transfer Using Cost Sensitive Online Learning Classification
- Computer Science
- 2015
A survey of cost-sensitive machine learning and the various methods used, with a focus on online learning methods.
Transfer Learning for Visual Categorization: A Survey
- Computer Science, IEEE Transactions on Neural Networks and Learning Systems
- 2015
This paper surveys state-of-the-art transfer learning algorithms in visual categorization applications, such as object recognition, image classification, and human action recognition, and examines whether these problems can be solved efficiently.
Transfer Learning: Survey and Classification
- Computer Science
- 2020
This survey paper explains transfer learning along with its categorization and provides examples and perspective related to transfer learning.
A survey of transfer learning
- Computer Science, Journal of Big Data
- 2016
This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications of transfer learning, which can be applied to big data environments.
Feature-based transfer learning with real-world applications
- Computer Science
- 2010
A novel dimensionality reduction framework for transfer learning is proposed, which tries to reduce the distance between different domains while preserving data properties as much as possible, and is applicable to many transfer learning problems when domain knowledge is unavailable.
Feature Selection for Transfer Learning
- Computer Science, ECML/PKDD
- 2011
This paper presents a novel method to identify variant and invariant features between two datasets, and formalizes the problem of finding differently distributed features as a convex optimization problem.
Transfer Learning Techniques
- Computer Science, Big Data Technologies and Applications
- 2016
This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications of transfer learning, which can be applied to big data environments.
Transfer Learning with Ensemble of Multiple Feature Representations
- Computer Science, 2018 IEEE 16th International Conference on Software Engineering Research, Management and Applications (SERA)
- 2018
This paper proposes an instance-based transfer learning method, a weighted ensemble framework with multiple feature representations, which achieves better performance than both the traditional transfer learning method and the non-transfer learning method.
An introduction to domain adaptation and transfer learning
- Computer Science, ArXiv
- 2018
In machine learning, if the training data is an unbiased sample of an underlying distribution, then the learned classification function will make accurate predictions for new samples. However, if the…
References
Showing 1-10 of 90 references
Boosting for transfer learning
- Computer Science, ICML '07
- 2007
This paper presents a novel transfer learning framework called TrAdaBoost, which extends boosting-based learning algorithms, and shows that this method can learn an accurate model using only a tiny amount of new data and a large amount of old data, even when the new data alone are not sufficient to train a model.
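The core idea in TrAdaBoost is instance reweighting: source ("old") examples the weak learner misclassifies are progressively down-weighted, while target ("new") examples are up-weighted as in AdaBoost. The following is a minimal sketch of that mechanism, not the authors' implementation; the decision-stump learner, the data shapes, and the synthetic data are assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tradaboost(Xs, ys, Xt, yt, n_iter=8):
    """Minimal TrAdaBoost-style sketch for binary labels in {0, 1}.

    Source instances lose weight when misclassified (they look
    unhelpful for the target task); target instances gain weight
    when misclassified, as in ordinary AdaBoost.
    """
    n, m = len(Xs), len(Xt)
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    w = np.ones(n + m)
    # fixed shrink factor for source instances (from the paper's schedule)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n) / n_iter))
    learners, betas = [], []
    for _ in range(n_iter):
        p = w / w.sum()
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=p)
        err = np.abs(h.predict(X) - y)               # 0/1 loss per instance
        # training error is measured on the *target* portion only
        eps = float(np.sum(p[n:] * err[n:]) / p[n:].sum())
        eps = min(max(eps, 1e-10), 0.499)            # keep beta_t well defined
        beta_t = eps / (1.0 - eps)
        w[:n] *= beta_src ** err[:n]                 # shrink bad source weights
        w[n:] *= beta_t ** (-err[n:])                # grow hard target weights
        learners.append(h)
        betas.append(beta_t)
    # final hypothesis: weighted vote over the later half of the learners
    half = n_iter // 2
    votes = np.zeros(m)
    for h, b in zip(learners[half:], betas[half:]):
        votes += -np.log(b) * h.predict(Xt)
    thresh = 0.5 * sum(-np.log(b) for b in betas[half:])
    return (votes >= thresh).astype(int)
```

On a toy 1-D task where the source decision boundary is shifted relative to the target's, the reweighting steers the ensemble toward the target boundary even with few target labels.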
Bridged Refinement for Transfer Learning
- Computer Science, PKDD
- 2007
A novel algorithm, bridged refinement, is proposed to take the shift of distribution into consideration; it corrects the labels predicted by a shift-unaware classifier towards the target distribution, taking the mixture distribution of the training and test data as a bridge for better transfer from the training data to the test data.
Dataset Shift in Machine Learning
- Computer Science
- 2009
This volume offers an overview of current efforts to deal with dataset and covariate shift, and places dataset shift in relationship to transfer learning, transduction, local learning, active learning, and semi-supervised learning.
Transferring Naive Bayes Classifiers for Text Classification
- Computer Science, AAAI
- 2007
This paper proposes a novel transfer-learning algorithm for text classification based on an EM-based Naive Bayes classifier, and shows that the algorithm outperforms traditional supervised and semi-supervised learning algorithms when the distributions of the training and test sets are increasingly different.
Transfer Learning via Dimensionality Reduction
- Computer Science, AAAI
- 2008
A new dimensionality reduction method is proposed to find a latent space that minimizes the distance between the distributions of the data in different domains; this latent space can be treated as a bridge for transferring knowledge from the source domain to the target domain.
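The "distance between distributions" that such methods minimize is commonly the Maximum Mean Discrepancy (MMD): the gap between the kernel mean embeddings of the two samples. A minimal NumPy sketch of the biased empirical estimate (the RBF kernel and the `gamma` value are assumptions, not the paper's exact setup):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # pairwise RBF kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(Xs, Xt, gamma=1.0):
    """Biased empirical estimate of squared MMD between two samples:
    mean k(s, s') + mean k(t, t') - 2 * mean k(s, t)."""
    return (rbf(Xs, Xs, gamma).mean()
            + rbf(Xt, Xt, gamma).mean()
            - 2.0 * rbf(Xs, Xt, gamma).mean())
```

Identical samples give an MMD of zero, and the estimate grows as the two samples drift apart; a dimensionality reduction method in this family searches for a projection under which this quantity is small while the data's structure is preserved.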
Multitask Learning
- Computer Science, Machine Learning
- 2004
Prior work on MTL is reviewed, new evidence that MTL in backprop nets discovers task relatedness without the need for supervisory signals is presented, and new results for MTL with k-nearest neighbor and kernel regression are presented.
Spectral domain-transfer learning
- Computer Science, KDD
- 2008
This paper formulates the domain-transfer learning problem under a novel spectral classification framework, where an objective function is introduced to seek consistency between the in-domain supervision and the out-of-domain intrinsic structure through optimization of the cost function.
Self-taught learning: transfer learning from unlabeled data
- Computer Science, ICML '07
- 2007
An approach to self-taught learning that uses sparse coding to construct higher-level features from the unlabeled data, forming a succinct input representation that significantly improves classification performance.
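The two-step recipe this entry describes — learn a sparse-coding dictionary from plentiful unlabeled data, then encode the scarce labeled data as activations of that dictionary — can be sketched with scikit-learn's `DictionaryLearning`. The data shapes, component count, and solver settings below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Step 1: learn a dictionary of basis vectors from *unlabeled* data.
unlabeled = rng.normal(size=(100, 8))
coder = DictionaryLearning(n_components=4, alpha=1.0, max_iter=50,
                           transform_algorithm="lasso_lars",
                           random_state=0)
coder.fit(unlabeled)

# Step 2: encode the scarce *labeled* data as sparse activations of
# those bases; the activations serve as higher-level input features
# for any downstream classifier.
labeled = rng.normal(size=(10, 8))
features = coder.transform(labeled)   # shape: (10, 4)
```

Because the dictionary is learned without labels, the unlabeled data need not come from the same classes as the labeled task, which is the point of the self-taught setting.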
Analysis of Representations for Domain Adaptation
- Computer Science, NIPS
- 2006
The theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.
Logistic regression with an auxiliary data source
- Computer Science, ICML
- 2005
This paper proposes a method to relax the requirement to draw examples from the same source distribution in the context of logistic regression, called "Migratory-Logit" or M-Logit, which is demonstrated successfully on simulated as well as real data sets.