A survey of robot learning from demonstration
Interactive Policy Learning through Confidence-Based Autonomy
The algorithm selects demonstrations based on a measure of action-selection confidence, and results show that, using Confident Execution, the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher.
Effect of human guidance and state space size on Interactive Reinforcement Learning
This work presents the first study of Interactive Reinforcement Learning in real-world robotic systems and reports on four experiments that study the effects that teacher guidance and state space size have on policy learning performance.
Robot Learning from Human Teachers
This book provides an introduction to the field with a focus on the unique technical challenges associated with designing robots that learn from naive human teachers, and provides best practices for evaluation of LfD systems.
Reinforcement Learning from Demonstration through Shaping
- T. Brys, A. Harutyunyan, Halit Bener Suay, S. Chernova, Matthew E. Taylor, A. Nowé
- Computer Science, Education, IJCAI
- 25 July 2015
This paper investigates the intersection of reinforcement learning and expert demonstrations, leveraging the theoretical guarantees provided by reinforcement learning, and using expert demonstrations to speed up this learning by biasing exploration through a process called reward shaping.
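The shaping idea summarized above can be sketched as potential-based reward shaping, where the shaping term F(s, s') = γΦ(s') − Φ(s) biases exploration toward demonstrated behavior without changing the optimal policy. The Gaussian similarity potential, function names, and parameters below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def potential(state, demo_states, sigma=1.0):
    """Phi(s): similarity of `state` to the closest demonstrated state
    (assumed Gaussian similarity; 1.0 when the state was demonstrated)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return max(math.exp(-dist2(state, d) / (2 * sigma ** 2)) for d in demo_states)

def shaped_reward(r, state, next_state, demo_states, gamma=0.99):
    """Environment reward plus F(s, s') = gamma*Phi(s') - Phi(s);
    potential-based shaping preserves the optimal policy."""
    return (r
            + gamma * potential(next_state, demo_states)
            - potential(state, demo_states))
```

States close to demonstrations receive a small positive bonus when moved toward, so the learner explores the demonstrated region first while the reinforcement-learning guarantees still apply.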
Confidence-based policy learning from demonstration using Gaussian mixture models
This work contributes an approach for interactive policy learning through expert demonstration that allows an agent to actively request and effectively represent demonstration examples. It introduces the Confident Execution approach, which focuses learning on relevant parts of the domain by enabling the agent to identify the need for, and request, demonstrations for specific parts of the state space.
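The request mechanism described above can be sketched as a confidence threshold on the density of the current state under a Gaussian mixture fit to past demonstrations: act autonomously when confident, ask the teacher otherwise. The 1-D mixture, function names, and threshold below are illustrative assumptions, not the paper's model.

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of x under a 1-D Gaussian."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def confidence(state, components):
    """Mixture density of `state`; components are (weight, mean, std) triples."""
    return sum(w * gaussian_pdf(state, m, s) for w, m, s in components)

def act_or_request(state, components, threshold=0.05):
    """Execute autonomously when the state is well covered by demonstrations,
    otherwise request a new demonstration for this part of the state space."""
    return "execute" if confidence(state, components) >= threshold else "request_demo"
```

Because requests are triggered only in low-density regions, demonstrations concentrate exactly where the learned model is weakest.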
Integrating reinforcement learning with human demonstrations of varying ability
This work introduces Human-Agent Transfer (HAT), an algorithm that combines transfer learning, learning from demonstration and reinforcement learning to achieve rapid learning and high performance in…
An evolutionary approach to gait learning for four-legged robots
- S. Chernova, M. Veloso
- Computer Science, IEEE/RSJ International Conference on Intelligent…
- 28 September 2004
This paper presents a new algorithm for walk optimization based on an evolutionary approach, which makes it more robust to noise in parameter evaluations and avoids prematurely converging to local optima, a problem encountered by both of the previously suggested algorithms.
Rearrangement: A Challenge for Embodied AI
A framework for research and evaluation in Embodied AI is described, based on a canonical task, Rearrangement, that can focus the development of new techniques and serve as a source of trained models transferable to other settings.
Are We Making Real Progress in Simulated Environments? Measuring the Sim2Real Gap in Embodied Visual Navigation
The Habitat-PyRobot Bridge, a library for seamless execution of identical code on a simulated agent and a physical robot, is developed, and a new metric, the Sim-vs-Real Correlation Coefficient (SRCC), is presented to quantify sim2real predictivity; low predictivity is found to be largely due to AI agents learning to 'cheat' by exploiting simulator imperfections.
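The SRCC idea can be sketched as a correlation between a performance metric measured in simulation and the same metric measured on the physical robot, across a set of agents: a coefficient near 1.0 means simulation results predict real-world rankings. The Pearson formulation and the example numbers below are illustrative assumptions, not values from the paper.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between paired samples xs and ys."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-agent success rates: in simulation vs. on the robot.
sim_success  = [0.95, 0.80, 0.60, 0.40]
real_success = [0.70, 0.65, 0.50, 0.35]
srcc = pearson(sim_success, real_success)  # near 1.0: sim ranking carries over
```

A low coefficient on such paired evaluations is the signal that agents are exploiting simulator imperfections rather than learning transferable behavior.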