Corpus ID: 6534170

Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning

@article{Forestier2017IntrinsicallyMG,
  title={Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning},
  author={S{\'e}bastien Forestier and Yoan Mollard and Pierre-Yves Oudeyer},
  journal={ArXiv},
  year={2017},
  volume={abs/1708.02190}
}
Intrinsically motivated spontaneous exploration is a key enabler of autonomous lifelong learning in human children. Key Method: the IMGEP algorithmic architecture relies on several principles: 1) self-generation of goals as parameterized reinforcement learning problems; 2) selection of goals based on intrinsic rewards; 3) exploration with parameterized time-bounded policies and fast incremental goal-parameterized policy search; 4) systematic reuse of information acquired when targeting a goal for improving…
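The four principles above can be illustrated with a minimal, self-contained sketch of a goal exploration loop. This is a toy 1-D stand-in, not the authors' implementation; all names and the goal-binning scheme are invented for illustration:

```python
import random

# Illustrative sketch of an IMGEP-style loop (toy 1-D problem, not the
# authors' implementation). Goals are target outcomes in [-1, 1] and a
# "policy" is a single motor parameter.

attempts = []      # (policy_params, outcome) pairs, reused across goals (4)
progress = {}      # per-goal-region learning progress, the intrinsic reward (2)
best_error = {}    # best error achieved so far per goal region

def goal_bin(g):
    return round(g, 1)

def sample_goal():
    # (1)-(2): self-generate candidate goals, prefer regions where the
    # intrinsic reward (recent learning progress) is high; unseen regions
    # are optimistically initialized so they get tried at least once.
    candidates = [random.uniform(-1, 1) for _ in range(5)]
    return max(candidates, key=lambda g: progress.get(goal_bin(g), 1.0))

def nearest_params(goal):
    # (3): fast incremental policy search -- perturb the stored attempt
    # whose outcome was closest to the current goal.
    if not attempts:
        return random.uniform(-1, 1)
    best = min(attempts, key=lambda a: abs(a[1] - goal))
    return best[0] + random.gauss(0, 0.05)

for _ in range(300):
    goal = sample_goal()
    params = nearest_params(goal)
    outcome = params + random.gauss(0, 0.02)   # time-bounded toy rollout
    error = abs(outcome - goal)
    b = goal_bin(goal)
    progress[b] = best_error.get(b, error) - error
    best_error[b] = min(best_error.get(b, error), error)
    attempts.append((params, outcome))         # (4): reuse across goals
```

Even this toy loop shows the key property: every rollout is stored and reused for all future goals, so exploration of one goal improves competence on its neighbors.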

Citations

CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning

CURIOUS is proposed, an algorithm that leverages a modular Universal Value Function Approximator with hindsight learning to achieve a diversity of goals of different kinds within a unique policy and an automated curriculum learning mechanism that biases the attention of the agent towards goals maximizing the absolute learning progress.
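The curriculum mechanism summarized here, biasing the agent's attention toward goals with the highest absolute learning progress, can be sketched as follows. Module names and the competence traces are invented for illustration; tabular bookkeeping stands in for the paper's deep components:

```python
import random

# Sketch of selecting goal modules by absolute learning progress (ALP),
# in the spirit of the curriculum mechanism described above. Module names
# and the competence traces are invented for illustration.

class ModuleTracker:
    def __init__(self, window=10):
        self.window = window
        self.competence = []              # recent competence measurements

    def record(self, c):
        self.competence.append(c)

    def alp(self):
        # Absolute learning progress: |mean of the newest window - mean of
        # the window before it|; optimistic while data is still scarce.
        h = self.competence[-2 * self.window:]
        if len(h) < 2 * self.window:
            return 1.0
        old, new = h[:self.window], h[self.window:]
        return abs(sum(new) / self.window - sum(old) / self.window)

def select_module(trackers, eps=0.2):
    # Mostly attend to the module that is learning fastest; occasionally
    # pick a random one so stale ALP estimates get refreshed.
    if random.random() < eps:
        return random.choice(list(trackers))
    return max(trackers, key=lambda n: trackers[n].alp())

trackers = {"push": ModuleTracker(), "grasp": ModuleTracker(), "stack": ModuleTracker()}
```

The absolute value matters: both improving and regressing (forgetting) modules attract attention, while mastered and currently-unlearnable modules are ignored.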

Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration

This work proposes to use deep representation learning algorithms to learn an adequate goal space, and presents experiments where a simulated robot arm interacts with an object, and shows that exploration algorithms using such learned representations can match the performance obtained using engineered representations.

Intrinsically Motivated Goal-Conditioned Reinforcement Learning: a Short Survey

A typology of methods at the intersection of deep RL and developmental approaches is proposed, in which deep RL algorithms are trained to tackle the developmental robotics problem of autonomously acquiring open-ended repertoires of skills.

Intrinsically Motivated Exploration of Learned Goal Spaces

This article shows that the goal space can be learned using deep representation learning algorithms, effectively reducing the burden of designing goal spaces and paving the way to autonomous learning agents that are able to autonomously build a representation of the world and use this representation to explore the world efficiently.
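The idea of learning the goal space rather than engineering it can be illustrated with a tiny stand-in: a one-component PCA fitted by power iteration plays the role of the deep representation learner, and goals are then sampled inside the observed latent range. All names here are invented for illustration:

```python
import random

# Tiny stand-in for learning a goal space from raw outcomes: a one-component
# PCA (power iteration) plays the role of the deep representation learner,
# and goals are self-generated inside the observed latent range.

def top_component(data, iters=50):
    # Return (mean, v): the data mean and the top principal direction.
    dim = len(data[0])
    mean = [sum(x[d] for x in data) / len(data) for d in range(dim)]
    rows = [[x[d] - mean[d] for d in range(dim)] for x in data]
    v = [random.random() + 0.1 for _ in range(dim)]
    for _ in range(iters):
        w = [0.0] * dim
        for r in rows:
            p = sum(a * b for a, b in zip(r, v))   # projection onto v
            for d in range(dim):
                w[d] += p * r[d]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]                  # power-iteration step
    return mean, v

def encode(x, mean, v):
    # Project a raw observation into the 1-D learned "goal space".
    return sum((a - m) * b for a, m, b in zip(x, mean, v))

def sample_goal(latents):
    # Self-generate a goal inside the range of previously observed outcomes.
    return random.uniform(min(latents), max(latents))
```

Sampling goals inside the span of past outcomes keeps exploration focused on outcomes the agent has some evidence are reachable, which is the role the learned goal space plays in these papers.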

CURIOUS: Intrinsically Motivated Multi-Task, Multi-Goal Reinforcement Learning

CURIOUS is proposed, an extension of Universal Value Function Approximators that enables intrinsically motivated agents to learn to achieve both multiple tasks and multiple goals within a unique policy, leveraging hindsight learning.

Autonomous learning of multiple curricula with non-stationary interdependencies

This work proposes a new hierarchical architecture (H-GRAIL) that selects its own goals on the basis of intrinsic motivations and treats curriculum learning of interdependent tasks as a Markov Decision Process.

Learning a Set of Interrelated Tasks by Using a Succession of Motor Policies for a Socially Guided Intrinsically Motivated Learner

This paper proposes an active learning algorithmic architecture that organizes its own learning process to achieve a set of complex tasks by learning sequences of primitive motor policies. Experiments in a simulated environment show that this architecture can tackle the learning of complex motor policies by adapting their complexity to the task at hand.

Language Grounding through Social Interactions and Curiosity-Driven Multi-Goal Learning

LE2 (Language Enhanced Exploration) is proposed, a learning algorithm leveraging intrinsic motivations and natural language (NL) interactions with a descriptive social partner (SP) that can learn an NL-conditioned reward function to formulate goals for intrinsically motivated goal exploration and learn a goal-conditioned policy.

Autonomous Goal Exploration using Learned Goal Spaces for Visuomotor Skill Acquisition in Robots

Recent results showing the applicability of Intrinsically Motivated Goal Exploration Processes principles on a real-world robotic setup, where a 6-joint robotic arm learns to manipulate a ball inside an arena, by choosing goals in a space learned from its past experience are presented.
...

References

SHOWING 1-10 OF 76 REFERENCES

CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning

CURIOUS is proposed, an algorithm that leverages a modular Universal Value Function Approximator with hindsight learning to achieve a diversity of goals of different kinds within a unique policy and an automated curriculum learning mechanism that biases the attention of the agent towards goals maximizing the absolute learning progress.

Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration

This work proposes to use deep representation learning algorithms to learn an adequate goal space, and presents experiments where a simulated robot arm interacts with an object, and shows that exploration algorithms using such learned representations can match the performance obtained using engineered representations.

CURIOUS: Intrinsically Motivated Multi-Task, Multi-Goal Reinforcement Learning

CURIOUS is proposed, an extension of Universal Value Function Approximators that enables intrinsically motivated agents to learn to achieve both multiple tasks and multiple goals within a unique policy, leveraging hindsight learning.

Active learning of inverse models with intrinsically motivated goal exploration in robots

Intrinsically motivated goal exploration for active motor learning in robots: A case study

We introduce the Self-Adaptive Goal Generation - Robust Intelligent Adaptive Curiosity (SAGG-RIAC) algorithm as an intrinsically motivated goal exploration mechanism which allows a redundant robot to…

Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation

h-DQN is presented, a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning, and allows for flexible goal specifications, such as functions over entities and relations.
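The two-timescale structure described here can be sketched with tabular Q-learning standing in for the deep value functions. The chain world, goal entities, and all names are invented for illustration; only the architecture (meta-level goal choice, goal-conditioned controller, intrinsic goal-reaching reward) mirrors the summary above:

```python
import random

# Structural sketch in the spirit of h-DQN, with tabular Q-learning standing
# in for the deep value functions (illustrative only). A meta-level picks a
# goal; the controller earns an intrinsic reward of 1 for reaching it.

N = 5                    # chain world with positions 0..4
GOALS = [2, 4]           # hypothetical "key" at 2 and "door" at 4

q = {}                   # controller Q-values: (pos, goal, action) -> value

def act(pos, goal, eps=0.2):
    # Epsilon-greedy controller, conditioned on the current goal.
    if random.random() < eps:
        return random.choice([-1, 1])
    return max([-1, 1], key=lambda a: q.get((pos, goal, a), 0.0))

def run_option(start, goal, max_steps=20):
    # The controller pursues one goal until reached or timed out; this
    # whole loop is the temporally abstract action the meta-level sees.
    pos = start
    for _ in range(max_steps):
        a = act(pos, goal)
        nxt = min(max(pos + a, 0), N - 1)
        r = 1.0 if nxt == goal else 0.0            # intrinsic reward
        best_next = max(q.get((nxt, goal, b), 0.0) for b in (-1, 1))
        key = (pos, goal, a)
        q[key] = q.get(key, 0.0) + 0.5 * (r + 0.9 * best_next - q.get(key, 0.0))
        pos = nxt
        if r > 0:
            break
    return pos

# A fixed round-robin over goals stands in for the learned meta-controller.
for _ in range(200):
    for goal in GOALS:
        run_option(0, goal)
```

The flexible goal specification the summary mentions corresponds to the `goal` argument threading through `act` and `q`: one controller serves every goal rather than one network per task.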

Curiosity Driven Exploration of Learned Disentangled Goal Spaces

It is shown that using a disentangled goal space leads to better exploration performance than an entangled goal space, and that the measure of learning progress used to drive curiosity-driven exploration can simultaneously be used to discover abstract, independently controllable features of the environment.

Intrinsically Motivated Learning in Natural and Artificial Systems

This book introduces the concept of intrinsic motivation in artificial systems, reviews the relevant literature, offers insights from the neural and behavioural sciences, and presents novel tools for research.

Visual Reinforcement Learning with Imagined Goals

An algorithm is proposed that acquires general-purpose skills by combining unsupervised representation learning with reinforcement learning of goal-conditioned policies; it is efficient enough to learn policies that operate on raw image observations and goals on a real-world robotic system, and substantially outperforms prior techniques.

Teacher algorithms for curriculum learning of Deep RL in continuously parameterized environments

This work considers the problem of how a teacher algorithm can enable an unknown Deep Reinforcement Learning (DRL) student to become good at a skill over a wide range of diverse environments and presents a new algorithm modeling absolute learning progress with Gaussian mixture models (ALP-GMM).
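The absolute-learning-progress idea behind this teacher algorithm can be sketched in simplified form, with the Gaussian mixture model replaced by a fixed histogram over a 1-D task-parameter space for brevity. This is an invented stand-in, not the authors' ALP-GMM code:

```python
import random

# Simplified sketch of ALP-driven task sampling in the spirit of ALP-GMM:
# a fixed histogram over the task-parameter space [0, 1) replaces the
# Gaussian mixture model for brevity (illustrative only).

BINS = 10

recent = [[] for _ in range(BINS)]   # per-bin episode returns, newest last

def alp(b, window=5):
    # Absolute learning progress: |mean of the newest window of returns
    # minus the mean of the window before it|.
    h = recent[b]
    if len(h) < 2 * window:
        return 1.0                   # optimistic for rarely sampled regions
    new, old = h[-window:], h[-2 * window:-window]
    return abs(sum(new) / window - sum(old) / window)

def sample_task(eps=0.2):
    # Mostly sample inside the bin with the highest ALP; sometimes sample
    # uniformly so the teacher keeps probing the whole space.
    if random.random() < eps:
        b = random.randrange(BINS)
    else:
        b = max(range(BINS), key=alp)
    return (b + random.random()) / BINS

def record(param, ret):
    # Report the student's return on the task with this parameter.
    recent[int(param * BINS)].append(ret)
```

As in the paper's setting, the teacher never inspects the student's internals: it only observes (task parameter, return) pairs and steers sampling toward regions where returns are changing.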
...