Transfer learning by prototype generation in continuous spaces

@article{Cote2016TransferLB,
  title={Transfer learning by prototype generation in continuous spaces},
  author={Enrique Munoz de Cote and Esteban O. Garcia and Eduardo F. Morales},
  journal={Adaptive Behavior},
  year={2016},
  volume={24},
  pages={464--478}
}
In machine learning, learning a task is expensive (many training samples are needed), so it is of general interest to be able to reuse knowledge across tasks. This is the case in aerial robotics applications, where an autonomous aerial robot cannot interact with its environment hazard-free. Prototype generation is a well-known technique, commonly used in supervised learning, that helps reduce the number of samples needed to learn a task. However, little is known about how such techniques…
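
The abstract names prototype generation as the supervised-learning idea being carried over. As a minimal, hedged sketch of that generic idea only (a k-means-based condensation for 1-NN classification; the function names `generate_prototypes` and `predict_1nn` are illustrative, and this is not the paper's transfer algorithm), prototype generation replaces a large training set with a few representative points per class:

```python
import numpy as np

def generate_prototypes(X, y, per_class=10, iters=20, seed=0):
    """Summarize each class by k-means centroids so a 1-NN classifier
    needs far fewer stored samples (generic prototype generation)."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c].astype(float)
        k = min(per_class, len(Xc))
        centers = Xc[rng.choice(len(Xc), size=k, replace=False)].copy()
        for _ in range(iters):  # plain Lloyd iterations
            assign = np.argmin(((Xc[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for j in range(k):
                if np.any(assign == j):
                    centers[j] = Xc[assign == j].mean(axis=0)
        protos.append(centers)
        labels.append(np.full(k, c))
    return np.vstack(protos), np.concatenate(labels)

def predict_1nn(protos, proto_labels, Xq):
    """Label each query point by its nearest prototype."""
    d = ((np.asarray(Xq, dtype=float)[:, None] - protos[None]) ** 2).sum(-1)
    return proto_labels[np.argmin(d, axis=1)]
```

A classifier trained on the returned prototypes touches only `per_class` points per class at query time, which is the sample-reduction property the abstract refers to.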

Citations

A Reinforcement Learning Method for Continuous Domains Using Artificial Hydrocarbon Networks

TLDR
The proposed method models the dynamics of the continuous task with the supervised AHN method; initial random rollouts and subsequent data collection from policy evaluation improve the training of the AHN-based dynamics model.

Transfer Learning for Multiagent Reinforcement Learning Systems

TLDR
This research proposes a Transfer Learning (TL) framework to accelerate learning by exploiting two knowledge sources: previously learned tasks, and advice from a more experienced agent.

Learning to Predict Consequences as a Method of Knowledge Transfer in Reinforcement Learning

TLDR
This work proposes a very natural style of knowledge transfer, in which the agent learns to predict actions’ environmental consequences using agent-centric information, and shows that this knowledge transfer approach can allow faster and lower cost learning than existing alternatives.

DREAM Architecture: a Developmental Approach to Open-Ended Learning in Robotics

TLDR
This work introduces the redescription cycle, a third cycle working at an even slower time scale to generate or adapt the required representations to the robot, its environment and the task, and presents DREAM (Deferred Restructuring of Experience in Autonomous Machines), a developmental cognitive architecture to bootstrap this redescription process stage by stage.

A Survey on Transfer Learning for Multiagent Reinforcement Learning Systems

TLDR
A taxonomy of solutions for the general knowledge reuse problem is defined, providing a comprehensive discussion of recent progress on knowledge reuse in Multiagent Systems (MAS) and of techniques for knowledge reuse across agents (that may be actuating in a shared environment or not).

Safe reinforcement learning using risk mapping by similarity

TLDR
This work contributes a new approach to assessing risk based on similarity, together with RMS, an algorithm for discrete scenarios that infers the risk of newly discovered states by analyzing how similar they are to previously known risky states.
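
The similarity-based risk idea summarized above can be sketched very simply. As a hedged illustration (the function name, Gaussian kernel, and threshold below are assumptions, not the published RMS algorithm), a new state's risk can be scored by its similarity to the closest known risky state:

```python
import numpy as np

def risk_by_similarity(state, risky_states, sigma=1.0):
    """Score a newly discovered state by its Gaussian-kernel similarity
    to the most similar previously known risky state. Illustrative
    sketch only, not the RMS algorithm as published."""
    if len(risky_states) == 0:
        return 0.0
    d2 = ((np.asarray(risky_states, dtype=float) - np.asarray(state, dtype=float)) ** 2).sum(axis=1)
    return float(np.exp(-d2 / (2.0 * sigma ** 2)).max())

# Hypothetical usage: avoid actions leading to high-inferred-risk states.
# if risk_by_similarity(candidate_state, known_risky_states) > 0.8:
#     ...choose a safer action...
```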

Efficient one-dimensional turbomachinery design method based on transfer learning and Bayesian optimization

TLDR
This paper demonstrates an efficient transfer optimization method for the highly nonlinear 1D turbine design problem that can reduce the computational cost by more than 30% while maintaining the same aerodynamic performance.

References

Showing 1-10 of 45 references

Transfer Learning for continuous State and Action Spaces

TLDR
This work presents a novel approach to transfer knowledge between tasks in a reinforcement learning (RL) framework with continuous states and actions, where the transition and policy functions are approximated by Gaussian processes.

Transfer of samples in batch reinforcement learning

TLDR
A novel algorithm is introduced that transfers samples from the source tasks most similar to the target task; it is shown empirically that, with the proposed approach, the transfer of samples is effective in reducing the learning complexity.
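
The selection idea in this summary (keep only source samples compatible with the target task) admits a simple sketch. Hedged illustration only: the names `select_transfer_samples` and `target_model`, and the prediction-error relevance score, are assumptions for exposition, not the paper's exact relevance measure:

```python
import numpy as np

def select_transfer_samples(source_samples, target_model, top_frac=0.3):
    """Keep the source transitions best predicted by a (learned) model
    of the target task's dynamics; illustrative sketch only.

    source_samples: list of (state, action, next_state) tuples.
    target_model:   callable (state, action) -> predicted next_state.
    """
    errors = np.array([
        np.linalg.norm(np.asarray(target_model(s, a)) - np.asarray(s_next))
        for (s, a, s_next) in source_samples
    ])
    k = max(1, int(top_frac * len(source_samples)))
    keep = np.argsort(errors)[:k]  # smallest prediction error = most similar
    return [source_samples[i] for i in keep]
```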

Reinforcement learning transfer via sparse coding

TLDR
Empirically shows that the learned inter-task mapping can be successfully used to improve the performance of a learned policy on a fixed number of environmental samples, reduce the learning times needed by the algorithms to converge to a policy on a fixed number of samples, and converge faster to a near-optimal policy given a large number of samples.

Accelerating Reinforcement Learning by Composing Solutions of Automatically Identified Subtasks

TLDR
A system that accelerates reinforcement learning by using transfer from related tasks that achieves much of its power by transferring parts of previously learned solutions rather than a single complete solution.

Unsupervised Cross-Domain Transfer in Policy Gradient Reinforcement Learning via Manifold Alignment

TLDR
An autonomous framework is introduced that uses unsupervised manifold alignment to learn inter-task mappings and effectively transfer samples between different task domains and demonstrates its effectiveness for cross-domain transfer in the context of policy gradient RL.

Qualitative Transfer for Reinforcement Learning with Continuous State and Action Spaces

TLDR
A novel approach to transferring knowledge between reinforcement learning tasks with continuous states and actions, where the transition and policy functions are approximated by Gaussian processes (GPs), using the GPs' hyper-parameters to represent the state transition function in the source task.

Learning to Control a Low-Cost Manipulator using Data-Efficient Reinforcement Learning

TLDR
It is demonstrated how a low-cost off-the-shelf robotic system can learn closed-loop policies for a stacking task in only a handful of trials, from scratch.

Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods

TLDR
A novel actor-critic approach in which the actor's policy is estimated through sequential Monte Carlo methods; results are reported for a control problem consisting of steering a boat across a river.

Transfer Learning via Inter-Task Mappings for Temporal Difference Learning

TLDR
This article compares learning on a complex task with three function approximators, a cerebellar model arithmetic computer (CMAC), an artificial neural network (ANN), and a radial basis function (RBF), and empirically demonstrates that directly transferring the action-value function can lead to a dramatic speedup in learning with all three.

Transfer Learning for Reinforcement Learning Domains: A Survey

TLDR
This article presents a framework that classifies transfer learning methods in terms of their capabilities and goals, and then uses it to survey the existing literature, as well as to suggest future directions for transfer learning work.