Towards more Reliable Transfer Learning

@article{Wang2018TowardsMR,
  title={Towards more Reliable Transfer Learning},
  author={Zirui Wang and Jaime G. Carbonell},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.02235}
}
Multi-source transfer learning has been proven effective when within-target labeled data is scarce. Previous work focuses primarily on exploiting domain similarities and assumes that source domains are richly or at least comparably labeled. Since this strong assumption rarely holds in practice, this paper relaxes it and addresses challenges related to sources with diverse labeling volume and diverse reliability. The first challenge is combining domain similarity and source reliability by…

Characterizing and Avoiding Negative Transfer

A novel technique is proposed to circumvent negative transfer by filtering out unrelated source data with an adversarial network; the approach is highly generic and can be applied to a wide range of transfer learning algorithms.
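
The filtering step lends itself to a compact illustration. Below is a minimal sketch in which a plain probabilistic domain classifier stands in for the paper's adversarial network: source samples the classifier scores as target-like are kept, the rest are filtered out. The logistic-regression discriminator and the keep-quantile are illustrative assumptions, not the paper's architecture.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def filter_source(X_src, X_tgt, keep_quantile=0.5):
    """Keep the source samples that a domain classifier scores as target-like.

    A simple probabilistic stand-in for the paper's adversarial discriminator:
    higher P(target | x) means a source sample resembles the target domain,
    so transferring it is less likely to hurt.
    """
    X = np.vstack([X_src, X_tgt])
    y = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]  # 0 = source, 1 = target
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores = clf.predict_proba(X_src)[:, 1]               # target-likeness per source sample
    threshold = np.quantile(scores, 1.0 - keep_quantile)
    return X_src[scores >= threshold]

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(200, 5))   # source, partly off-distribution
X_tgt = rng.normal(0.5, 1.0, size=(100, 5))   # shifted target domain
print(filter_source(X_src, X_tgt).shape)      # roughly half the source pool survives
```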

Overcoming Negative Transfer: A Survey

This survey attempts to analyze the factors related to negative transfer and summarizes the theories and advances for overcoming negative transfer from four crucial aspects: source data quality, target data quality, domain divergence, and generic algorithms, which may provide readers with insight into the current research status and ideas.

A Survey on Negative Transfer

The definition of negative transfer (NT) is considered, a taxonomy of the contributing factors is discussed, and nearly fifty representative approaches for handling NT are categorized and reviewed from four perspectives: secure transfer, domain similarity estimation, distant transfer, and negative transfer mitigation.

Efficient Meta Lifelong-Learning with Limited Memory

This paper identifies three common principles of lifelong learning methods and proposes an efficient meta-lifelong framework that combines them in a synergistic fashion and alleviates both catastrophic forgetting and negative transfer at the same time.

A novel active multi-source transfer learning algorithm for time series forecasting

This paper proposes a new multi-source TSF transfer learning algorithm, abbreviated as the MultiSrcTL algorithm, and a novel active multi-source transfer learning algorithm, abbreviated as the AcMultiSrcTL algorithm; the latter integrates multi-source transfer learning with active learning (AL) and takes the former as its sub-algorithm.

A Simple Approach to Balance Task Loss in Multi-Task Learning

This paper proposes a Balanced Multi-Task Learning (BMTL) framework that transforms the training loss of each task so as to balance different tasks, based on the intuitive idea that tasks with larger training losses will receive more attention during the optimization procedure.
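
The balancing mechanism is easy to sketch: pass each task loss through a monotonically increasing, convex transform before summing, so that tasks with larger losses contribute steeper gradients. A minimal PyTorch sketch follows; the exponential transform and the temperature are illustrative assumptions rather than the paper's exact choices.

```python
import torch

def balanced_multitask_loss(task_losses, temperature=2.0):
    """Combine per-task losses through a monotone convex transform.

    Tasks with larger losses get larger transformed gradients and hence
    more attention during optimization; the exponential transform is one
    illustrative choice of such a function.
    """
    return sum(torch.exp(loss / temperature) for loss in task_losses)

# Toy usage: two losses sharing the same parameters.
w = torch.ones(3, requires_grad=True)
loss_a = (w ** 2).sum()              # currently large -> emphasized
loss_b = 0.1 * ((w - 1) ** 2).sum()  # currently small -> de-emphasized
balanced_multitask_loss([loss_a, loss_b]).backward()
print(w.grad)                        # dominated by the larger task loss
```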

Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models

This work derives a simple and scalable optimization procedure, named Gradient Vaccine, which encourages more geometrically aligned parameter updates for close tasks; it reveals the importance of properly measuring and utilizing language proximity in multilingual optimization and has broader implications for multi-task learning beyond multilingual modeling.
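
The geometric idea can be sketched in a few lines of NumPy: measure the cosine similarity between two tasks' gradients and, if it falls below a target value, add just enough of the other task's direction to reach that target. In the paper the target is adapted per task pair (e.g., from a running estimate of task proximity) and both gradients are adjusted; the closed-form, one-sided update below is a simplified stand-in, not the paper's exact rule.

```python
import numpy as np

def align_gradient(g_i, g_j, target_cos=0.5):
    """Nudge task i's gradient toward task j's direction until their
    cosine similarity reaches target_cos (assumes nonzero gradients).

    Decompose g_i into components along and orthogonal to g_j; the
    mixing coefficient achieving the target cosine has a closed form.
    """
    u_j = g_j / np.linalg.norm(g_j)
    a = float(g_i @ u_j)                 # component of g_i along g_j
    b = np.linalg.norm(g_i - a * u_j)    # orthogonal component's magnitude
    if a / np.linalg.norm(g_i) >= target_cos:
        return g_i                       # already aligned enough
    c = b * target_cos / np.sqrt(1.0 - target_cos ** 2) - a
    return g_i + c * u_j

g_i = np.array([1.0, -1.0, 0.0])
g_j = np.array([1.0, 1.0, 0.0])
g_new = align_gradient(g_i, g_j)         # cosine with g_j is now 0.5
print(g_new, g_new @ g_j / (np.linalg.norm(g_new) * np.linalg.norm(g_j)))
```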

On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment

The results show that negative interference is more common than previously known, suggesting new directions for improving multilingual representations; a meta-learning algorithm is presented that obtains better cross-lingual transferability and alleviates negative interference.

References


Joint Transfer and Batch-mode Active Learning

This work presents an integrated framework that performs transfer and active learning simultaneously by solving a single convex optimization problem, minimizing a common objective that reduces the distribution difference between the set of re-weighted source data combined with the queried target-domain data and the set of unlabeled target-domain data.
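
The "distribution difference" in such objectives is commonly instantiated as a maximum mean discrepancy (MMD) between the two pools. The sketch below computes a weighted squared MMD, with the sample weights playing the role of the source re-weighting; the RBF kernel and the use of MMD here are assumptions for illustration, not a claim about the paper's exact objective.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def weighted_mmd2(X_labeled, w, X_unlabeled, gamma=1.0):
    """Squared MMD between a weighted labeled pool (re-weighted source
    plus queried target points, weights w summing to 1) and the pool of
    unlabeled target data. Jointly optimizing w and the query set is
    what the framework's convex program would do; here they are fixed.
    """
    m = len(X_unlabeled)
    term_ll = w @ rbf(X_labeled, X_labeled, gamma) @ w
    term_uu = rbf(X_unlabeled, X_unlabeled, gamma).sum() / m ** 2
    cross = w @ rbf(X_labeled, X_unlabeled, gamma).sum(axis=1) / m
    return term_ll + term_uu - 2.0 * cross

rng = np.random.default_rng(0)
Xl = rng.normal(size=(30, 4))
Xu = rng.normal(0.3, 1.0, size=(40, 4))
print(weighted_mmd2(Xl, np.full(30, 1 / 30), Xu))
```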

Transfer learning from multiple source domains via consensus regularization

This work proposes a consensus regularization framework for transfer learning from multiple source domains to a target domain, in which a local classifier is trained by considering both local data available in a source domain and the prediction consensus with the classifiers from other source domains.
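
One way to make the consensus term concrete: score, on unlabeled target data, how much the per-source classifiers' class-probability outputs disagree, and add that score to each local training loss. The variance-based disagreement measure below is an illustrative choice, not necessarily the exact measure used in the paper.

```python
import numpy as np

def consensus_penalty(prob_list):
    """Disagreement among per-source classifiers on unlabeled target data.

    prob_list: one (n_unlabeled, n_classes) array of predicted class
    probabilities per source-domain classifier. Adding this penalty to
    each local loss pushes the classifiers toward consensus predictions.
    """
    P = np.stack(prob_list)               # (n_sources, n, n_classes)
    return P.var(axis=0).sum(axis=1).mean()

p1 = np.array([[0.9, 0.1], [0.2, 0.8]])  # source-1 classifier's outputs
p2 = np.array([[0.6, 0.4], [0.3, 0.7]])  # source-2 classifier's outputs
print(consensus_penalty([p1, p2]))       # 0 only under perfect agreement
```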

Completely Heterogeneous Transfer Learning with Attention - What And What Not To Transfer

A new heterogeneous transfer learning approach is defined that selects and attends to an optimized subset of source samples from which to transfer knowledge, and builds a unified transfer network that learns from both source and target knowledge.

Active Transfer Learning under Model Shift

This work proposes two transfer learning algorithms that allow changes in all marginal and conditional distributions but assume the changes are smooth in order to achieve transfer between the tasks.

Source-Target Similarity Modelings for Multi-Source Transfer Gaussian Process Regression

This paper investigates the feasibility and performance of a family of transfer covariance functions that represent the pairwise similarity of each source and the target domain, and proposes TCMSStack, an integrated strategy incorporating the benefits of the transfer covariance function and stacking.
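
The simplest member of this family is easy to write down: use one base kernel everywhere, but scale cross-domain entries by a similarity coefficient. A sketch follows, with an RBF base kernel and a single coefficient lam as illustrative assumptions; for |lam| <= 1 the resulting matrix remains positive semi-definite, so it can be used directly inside a GP regressor.

```python
import numpy as np

def transfer_kernel(X1, dom1, X2, dom2, lam=0.8, length_scale=1.0):
    """Transfer covariance: a base RBF kernel, scaled by lam whenever the
    two points come from different domains. lam near 1 treats source and
    target as closely related; lam near 0 decorrelates them.
    """
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / length_scale ** 2)
    cross = dom1[:, None] != dom2[None, :]   # True where domains differ
    return np.where(cross, lam * K, K)

X = np.random.default_rng(0).normal(size=(5, 2))
dom = np.array([0, 0, 0, 1, 1])              # 0 = source, 1 = target
print(transfer_kernel(X, dom, X, dom))
```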

Domain Adaptation via Transfer Component Analysis

This work proposes a novel dimensionality reduction framework for reducing the distance between domains in a latent space for domain adaptation, with both unsupervised and semi-supervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
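
Transfer Component Analysis has a compact closed form: minimize the MMD between the projected domains while preserving data variance, which reduces to an eigenproblem on kernel matrices. The sketch below follows that standard unsupervised formulation with illustrative hyperparameters.

```python
import numpy as np

def tca(X_src, X_tgt, dim=2, mu=1.0, gamma=1.0):
    """Unsupervised TCA: embed source and target via the leading
    eigenvectors of (K L K + mu I)^{-1} K H K, where L encodes the MMD
    between domains and H centers the kernel.
    """
    ns, nt = len(X_src), len(X_tgt)
    X = np.vstack([X_src, X_tgt])
    n = ns + nt
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                      # RBF kernel on all data
    L = np.zeros((n, n))                         # MMD coefficient matrix
    L[:ns, :ns] = 1.0 / ns ** 2
    L[ns:, ns:] = 1.0 / nt ** 2
    L[:ns, ns:] = L[ns:, :ns] = -1.0 / (ns * nt)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(M)
    W = np.real(vecs[:, np.argsort(-np.real(vals))[:dim]])
    Z = K @ W                                    # transfer components
    return Z[:ns], Z[ns:]

rng = np.random.default_rng(0)
Zs, Zt = tca(rng.normal(size=(20, 3)), rng.normal(0.5, 1.0, size=(15, 3)))
print(Zs.shape, Zt.shape)
```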

Transfer Learning with Active Queries from Source Domain

This paper jointly performs transfer learning and active learning by querying the most valuable information from the source domain, integrating the computation of importance weights for domain adaptation and the instance selection for active queries into one unified framework based on distribution matching.

Domain adaptation from multiple sources via auxiliary classifiers

A new data-dependent regularizer based on the smoothness assumption is incorporated into Least-Squares SVM (LS-SVM); it enforces that the target classifier shares similar decision values with the auxiliary classifiers from relevant source domains on the unlabeled patterns of the target domain.
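
The regularizer itself is a one-liner once decision values are in hand: penalize the squared gap between the target classifier and a relevance-weighted blend of the auxiliary classifiers on unlabeled target patterns. The sketch below assumes the relevance weights are given; learning them, and folding the penalty into the LS-SVM objective, is the paper's concern.

```python
import numpy as np

def auxiliary_consistency_penalty(f_target, f_aux, weights):
    """Data-dependent smoothness regularizer.

    f_target: (n,) decision values of the target classifier on
        unlabeled target patterns.
    f_aux: (n_sources, n) decision values of the auxiliary per-source
        classifiers on the same patterns.
    weights: (n_sources,) relevance weights summing to 1 (assumed given).
    """
    blended = weights @ f_aux
    return ((f_target - blended) ** 2).mean()

f_t = np.array([0.7, -0.2, 1.1])
f_a = np.array([[0.8, -0.1, 0.9],    # classifier from source 1
                [0.5, -0.4, 1.3]])   # classifier from source 2
print(auxiliary_consistency_penalty(f_t, f_a, np.array([0.6, 0.4])))
```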

A Two-Stage Weighting Framework for Multi-Source Domain Adaptation

A two-stage domain adaptation methodology is proposed that combines weighted data from multiple sources with the target-domain data, based on marginal probability differences as well as conditional probability differences, using a weighted Rademacher complexity measure.

A Survey on Transfer Learning

The relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, is discussed.