
Learning Multi-Objective Curricula for Robotic Policy Learning

@inproceedings{Kang2021LearningMC,
  title={Learning Multi-Objective Curricula for Robotic Policy Learning},
  author={Jikun Kang and Miao Liu and Abhinav Gupta and Christopher Joseph Pal and Xuefei Liu and Jie Fu},
  year={2021}
}
Various automatic curriculum learning (ACL) methods have been proposed to improve the sample efficiency and final performance of robot policy learning. They are designed to control how a robotic agent collects data, inspired by how humans gradually adapt their learning process to their capabilities. In this paper, we propose a unified automatic curriculum learning framework to create multi-objective but coherent curricula that are generated by a set of parametric curriculum…
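As described, the framework generates multi-objective but coherent curricula from a set of learned, parametric components. The sketch below shows one minimal way such components could jointly define each training task and be adapted from the student's progress; the component names, task fields, and improvement-weighted update rule are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch only: the component names, task fields, and update rule are
# assumptions, not the paper's actual curriculum architecture or interface.
import numpy as np

rng = np.random.default_rng(0)

class CurriculumComponent:
    """One parametric curriculum component proposing a single aspect of the task."""
    def __init__(self, dim):
        self.mean = np.zeros(dim)   # parameters of the proposal distribution
        self.std = np.ones(dim)

    def propose(self):
        return rng.normal(self.mean, self.std)

    def update(self, proposal, improvement, lr=0.1):
        # Nudge the proposal distribution toward proposals that helped the student.
        self.mean += lr * improvement * (proposal - self.mean)

# A set of components that jointly define one coherent training task for the robot.
components = {
    "goal": CurriculumComponent(3),
    "start_state": CurriculumComponent(3),
    "reward_weights": CurriculumComponent(2),
}

def train_student_on(task):
    # Placeholder for one policy-learning step; returns the student's improvement.
    return float(rng.uniform(0.0, 1.0))

for step in range(100):
    task = {name: c.propose() for name, c in components.items()}
    improvement = train_student_on(task)
    for name, c in components.items():
        c.update(task[name], improvement)
```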

References


Autonomous Task Sequencing for Customized Curriculum Design in Reinforcement Learning

This paper formulates curriculum design as a Markov Decision Process, which directly models the accumulation of knowledge as an agent interacts with tasks, and proposes a method that approximates the execution of an optimal policy in this MDP to produce an agent-specific curriculum.
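A hedged sketch of the curriculum-MDP view: the state summarises the agent's accumulated competence, an action selects the next training task, and the reward is the resulting improvement on the target task. The task list, toy competence dynamics, and one-step-lookahead curriculum policy below are assumptions made purely for illustration.

```python
# Hedged sketch of a curriculum MDP: the state is the agent's accumulated
# competence, an action picks the next training task, and the reward is the
# resulting gain on the target task. Tasks and dynamics are toy assumptions.
import numpy as np

rng = np.random.default_rng(1)
tasks = ["reach", "push", "stack"]            # candidate source tasks (assumed)
competence = {t: 0.0 for t in tasks}          # curriculum-MDP state: accumulated knowledge

def target_task_score():
    # Toy stand-in: competence on the target task benefits from every source task.
    return float(np.mean(list(competence.values())))

def train_on(task):
    # Toy transition: training raises competence with diminishing returns.
    competence[task] += 0.5 * (1.0 - competence[task]) * rng.uniform(0.5, 1.0)

# A one-step-lookahead curriculum policy over this MDP: simulate training on each
# task and pick the one with the largest estimated gain on the target task.
for step in range(10):
    before = target_task_score()
    gains = {}
    for t in tasks:
        saved = dict(competence)
        train_on(t)                           # simulated transition
        gains[t] = target_task_score() - before
        competence.clear()
        competence.update(saved)              # roll the simulated step back
    chosen = max(gains, key=gains.get)
    train_on(chosen)                          # real transition of the curriculum MDP
    print(step, chosen, round(target_task_score(), 3))
```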

Meta Automatic Curriculum Learning

This work presents AGAIN, a first instantiation of Meta-ACL, and showcases its benefits for curriculum generation over classical ACL in multiple simulated environments including procedurally generated parkour environments with learners of varying morphologies.

Learning Curriculum Policies for Reinforcement Learning

The method is extended to handle multiple transfer learning algorithms, and it is shown for the first time that a curriculum policy over such an MDP can be learned from experience.

Source Task Creation for Curriculum Learning

This paper addresses the problem of curriculum learning in reinforcement learning, in which the goal is to design a sequence of source tasks for an agent to train on, such that final performance or learning speed on the target task is improved.

Reverse Curriculum Generation for Reinforcement Learning

This work proposes a method to learn goal-oriented tasks without requiring any prior knowledge other than a single state in which the task is achieved, and generates a curriculum of start states that adapts to the agent's performance, leading to efficient training on goal-oriented tasks.
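A toy sketch of the reverse-curriculum loop on a one-dimensional task: start states are grown outwards from the single goal state by random perturbation and kept only when the agent's success rate from them is of intermediate difficulty. The environment, thresholds, and success model are assumptions, not the paper's setup.

```python
# Toy 1-D version: grow start states backwards from the goal, keeping only those
# of intermediate difficulty for the current policy. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(2)
GOAL = 0.0

def success_rate(start, skill):
    # Toy agent: succeeds more often the closer the start state is to the goal.
    return float(np.mean(rng.random(20) < np.exp(-abs(start - GOAL) / skill)))

starts, skill = [GOAL], 0.2
for iteration in range(5):
    # Expand the frontier: short random perturbations of the current start states.
    candidates = [s + rng.normal(0.0, 0.3) for s in starts for _ in range(5)]
    rated = [(s, success_rate(s, skill)) for s in candidates]
    # Keep "good" starts of intermediate difficulty (here, 10%-90% success).
    starts = [s for s, r in rated if 0.1 < r < 0.9] or starts
    skill += 0.1        # stand-in for the policy improving by training on these starts
    print(iteration, len(starts), round(max(abs(s) for s in starts), 2))
```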

Automatic Goal Generation for Reinforcement Learning Agents

This work uses a generator network to propose tasks for the agent to try to achieve, specified as goal states, and shows that, by using this framework, an agent can efficiently and automatically learn to perform a wide set of tasks without requiring any prior knowledge of its environment.
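The paper trains a generator network (a GAN) to propose such goals; the sketch below keeps only the surrounding feedback loop, replacing the learned generator with resampling around goals previously rated as intermediate difficulty, much as the reverse-curriculum sketch above does for start states. The thresholds and toy success model are assumptions.

```python
# Feedback loop only; the paper's GAN generator is replaced here by resampling
# around goals previously rated as intermediate difficulty. Numbers are assumptions.
import numpy as np

rng = np.random.default_rng(3)
skill = 0.3

def success_rate(goal):
    # Toy stand-in for the current policy: nearby goals are easier to reach.
    return float(np.mean(rng.random(20) < np.exp(-np.linalg.norm(goal) / skill)))

pool = [np.zeros(2)]                      # goals of intermediate difficulty found so far
for iteration in range(5):
    base = pool[rng.integers(len(pool))]
    candidates = [base + rng.normal(0.0, 0.5, size=2) for _ in range(20)]
    rated = [(g, success_rate(g)) for g in candidates]
    goid = [g for g, r in rated if 0.1 < r < 0.9]   # "goals of intermediate difficulty"
    pool = goid or pool
    skill += 0.1                          # stand-in for training the policy on these goals
    mean_dist = float(np.mean([np.linalg.norm(g) for g in pool]))
    print(iteration, len(pool), round(mean_dist, 2))
```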

CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning

CausalWorld is proposed, a benchmark for causal structure and transfer learning in a robotic manipulation environment that is a simulation of an open-source robotic platform, hence offering the possibility of sim-to-real transfer.

Automatic Curriculum Graph Generation for Reinforcement Learning Agents

This work introduces a method that uses task descriptors and a novel metric of transfer potential to automatically generate a curriculum as a directed acyclic graph (as opposed to the linear sequences used in existing work).
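A small sketch of how pairwise transfer-potential scores might be turned into a curriculum DAG and a valid training order. The tasks, scores, and threshold are invented for illustration; in the paper, transfer potential is computed from task descriptors.

```python
# Toy sketch: pairwise transfer-potential scores -> curriculum DAG -> training order.
# The tasks, scores, and threshold are invented; graphlib ships with Python 3.9+.
from graphlib import TopologicalSorter

# transfer_potential[(a, b)]: estimated benefit of learning task a before task b.
transfer_potential = {
    ("reach", "push"): 0.8, ("reach", "pick"): 0.7,
    ("push", "stack"): 0.6, ("pick", "stack"): 0.9,
    ("push", "pick"): 0.1,
}
THRESHOLD = 0.5

# Build the DAG: an edge a -> b means "train on a before b".
predecessors = {}
for (a, b), score in transfer_potential.items():
    if score > THRESHOLD:
        predecessors.setdefault(b, set()).add(a)

order = list(TopologicalSorter(predecessors).static_order())
print(order)   # one valid curriculum respecting the prerequisite structure
```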

Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning

This work defines a novel method of multitask and transfer learning that enables an autonomous agent to learn how to behave in multiple tasks simultaneously, and then generalize its knowledge to new domains, and uses Atari games as a testing environment to demonstrate these methods.
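Actor-Mimic's multitask objective is commonly summarised as policy regression: one student network is trained with a cross-entropy loss to match the softmax action distributions of several task-specific experts. The sketch below reproduces only that objective with random linear "experts" on synthetic states; the feature-regression term and everything Atari-specific are omitted.

```python
# Policy-regression sketch only: a single student matches several experts' softmax
# policies via cross-entropy. Experts and states are random stand-ins, not DQNs.
import numpy as np

rng = np.random.default_rng(4)
STATE_DIM, N_ACTIONS, N_TASKS = 8, 4, 3

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

experts = [rng.normal(size=(STATE_DIM, N_ACTIONS)) for _ in range(N_TASKS)]  # one per task
W = np.zeros((STATE_DIM, N_ACTIONS))                                         # shared student

for step in range(500):
    task = rng.integers(N_TASKS)
    states = rng.normal(size=(32, STATE_DIM))        # batch of states "from" that task
    target = softmax(states @ experts[task])         # expert's action distribution
    pred = softmax(states @ W)                       # student's action distribution
    grad = states.T @ (pred - target) / len(states)  # gradient of the cross-entropy loss
    W -= 0.5 * grad

# Evaluate how well the single student matches each expert's distribution.
states = rng.normal(size=(256, STATE_DIM))
for i, expert in enumerate(experts):
    ce = -(softmax(states @ expert) * np.log(softmax(states @ W) + 1e-8)).sum(axis=1).mean()
    print(f"task {i}: mean cross-entropy {ce:.3f}")
```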

Composable Deep Reinforcement Learning for Robotic Manipulation

This paper shows that policies learned with soft Q-learning can be composed to create new policies, and that the optimality of the resulting policy can be bounded in terms of the divergence between the composed policies.
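A toy illustration of that composition rule for maximum-entropy policies: with soft Q-learning, a policy for the combination of two tasks can be obtained from the average of the tasks' soft Q-functions, pi ∝ exp((Q1 + Q2) / 2), up to the bounded suboptimality the paper quantifies. The discrete action set and random Q-values below are stand-ins for learned manipulation Q-functions.

```python
# Toy illustration of composing maximum-entropy policies: pi_i ∝ exp(Q_i / alpha),
# and the composed policy uses the average of the soft Q-functions. Q-values are
# random stand-ins for learned manipulation Q-functions; alpha is the temperature.
import numpy as np

rng = np.random.default_rng(5)
N_ACTIONS = 5

def soft_policy(q, alpha=1.0):
    z = q / alpha
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

q_task1 = rng.normal(size=N_ACTIONS)   # soft Q-values for task 1 (e.g. "reach the cup")
q_task2 = rng.normal(size=N_ACTIONS)   # soft Q-values for task 2 (e.g. "avoid the obstacle")

pi_composed = soft_policy(0.5 * (q_task1 + q_task2))   # policy for both tasks at once

print("task 1  :", np.round(soft_policy(q_task1), 2))
print("task 2  :", np.round(soft_policy(q_task2), 2))
print("composed:", np.round(pi_composed, 2))
```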
...