Corpus ID: 237417334

A Bayesian Approach to (Online) Transfer Learning: Theory and Algorithms

Xuetong Wu, Jonathan H. Manton, Uwe Aickelin and Jingge Zhu
Transfer learning is a machine learning paradigm in which knowledge from one problem is utilized to solve a new but related problem. While it is conceivable that knowledge from one task could be useful for solving a related task, transfer learning algorithms can, if not executed properly, impair learning performance instead of improving it, a phenomenon commonly known as negative transfer. In this paper, we study transfer learning from a Bayesian perspective, where a parametric statistical model is used…


Information-theoretic analysis for transfer learning
The results suggest, perhaps as expected, that the Kullback-Leibler (KL) divergence D(μ║μ') plays an important role in characterizing the generalization error in the setting of domain adaptation.
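For discrete distributions, the KL divergence referenced above has a simple closed form, D(μ║μ') = Σ μ(x) log(μ(x)/μ'(x)). A minimal sketch (illustrative only, not the paper's actual bound):

```python
import math

def kl_divergence(mu, mu_prime):
    """Discrete KL divergence D(mu || mu').

    Assumes mu and mu_prime are probability vectors over the same support,
    with mu_prime(x) > 0 wherever mu(x) > 0.
    """
    return sum(p * math.log(p / q) for p, q in zip(mu, mu_prime) if p > 0)

# Identical source and target distributions: zero divergence, no domain gap.
print(kl_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0

# A shifted target distribution yields a strictly positive divergence; in
# bounds of this type, a larger divergence inflates the generalization error.
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))
```

The divergence is asymmetric (D(μ║μ') ≠ D(μ'║μ) in general), which is why the direction of the source/target arguments matters in such bounds.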
A Survey on Transfer Learning
The relationships between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift, are discussed.
Online Transfer Learning
Transfer of samples in batch reinforcement learning
A novel algorithm is introduced that transfers samples from the source tasks that are most similar to the target task, and it is empirically shown that, following the proposed approach, the transfer of samples is effective in reducing the learning complexity.
Stability and Hypothesis Transfer Learning
It is shown that the relatedness of source and target domains accelerates the convergence of the leave-one-out error to the generalization error, thus enabling the use of the leave-one-out error to find the optimal transfer parameters, even in the presence of a small training set.
Transfer Learning for Reinforcement Learning Domains: A Survey
This article presents a framework that classifies transfer learning methods in terms of their capabilities and goals, and then uses it to survey the existing literature, as well as to suggest future directions for transfer learning work.
Transfer in Reinforcement Learning: A Framework and a Survey
  A. Lazaric, in Reinforcement Learning (2012)
This chapter provides a formalization of the general transfer problem, the main settings which have been investigated so far, and the most important approaches to transfer in reinforcement learning.
Theoretical Analysis of Domain Adaptation with Optimal Transport
A theoretical study is provided of the advantages that concepts borrowed from optimal transport theory can bring to multi-source domain adaptation, and it is shown that the Wasserstein metric can be used as a divergence measure between distributions to obtain generalization guarantees for three different learning settings.
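In one dimension the Wasserstein metric mentioned above is easy to compute: for two equal-size samples, the first-order (earth mover's) distance reduces to the mean absolute difference of sorted order statistics. A toy sketch of that special case (not the multi-source machinery studied in the paper):

```python
def wasserstein_1d(xs, ys):
    """First-order Wasserstein distance between two equal-size 1-D samples.

    With both samples sorted, the optimal transport plan matches order
    statistics, so W1 is the mean absolute pairwise difference.
    """
    assert len(xs) == len(ys), "this sketch assumes equal sample sizes"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Translating a sample by a constant c moves it exactly c in W1 distance,
# a property (sensitivity to geometry) that the KL divergence lacks.
src = [0.0, 1.0, 2.0, 3.0]
tgt = [x + 0.5 for x in src]
print(wasserstein_1d(src, tgt))  # 0.5
```

Unlike the KL divergence, W1 stays finite for distributions with disjoint supports, which is one reason it is attractive as a domain-divergence measure.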
Minimum Excess Risk in Bayesian Learning
The definition and analysis of the minimum excess risk (MER) are extended to the setting with multiple parametric model families and the setting with nonparametric models, and some comparisons are drawn between the MER in Bayesian learning and the excess risk in frequentist learning.
Injecting Prior Knowledge for Transfer Learning into Reinforcement Learning Algorithms using Logic Tensor Networks
The paper proposes using a first-order-logic language grounded in deep neural networks to represent facts about objects and their semantics in the real world, in order to help reinforcement learning agents transfer knowledge.