Gaussian Processes for Data-Efficient Learning in Robotics and Control

@article{Deisenroth2015GaussianPF,
  title={Gaussian Processes for Data-Efficient Learning in Robotics and Control},
  author={Marc Peter Deisenroth and Dieter Fox and Carl Edward Rasmussen},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2015},
  volume={37},
  pages={408-423}
}
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems such as robots, where a large number of interactions can be impractical and time-consuming. To address this problem…
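A probabilistic forward model learned from very few interactions is the core object in this line of work. The following minimal Python sketch fits a GP to state transitions of a toy 1-D system with scikit-learn; the dynamics, data sizes, and hyperparameters are illustrative assumptions, and the actual method additionally propagates full Gaussian state distributions analytically rather than only making point predictions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def true_dynamics(x, u):
    # Toy system; unknown to the learner.
    return x + 0.1 * (-np.sin(x) + u)

# A handful of random interactions; data efficiency is the whole point.
X = rng.uniform(-2, 2, size=(15, 2))            # columns: state x, action u
y = true_dynamics(X[:, 0], X[:, 1]) - X[:, 0]   # learn the state *change*

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-3), normalize_y=True)
gp.fit(X, y)

# Predictions come with uncertainty, which model-based RL can exploit.
mean, std = gp.predict(np.array([[0.5, -1.0]]), return_std=True)
print(f"predicted next state: {0.5 + mean[0]:.3f} +/- {std[0]:.3f}")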
Policy search for learning robot control using sparse data
TLDR
This paper investigates how model-based reinforcement learning, in particular the probabilistic inference for learning control method (PILCO), can be tailored to cope with the case of sparse data to speed up learning, and shows that by including prior knowledge, policy learning can be sped up in the presence of sparse data.
Bayesian learning for data-efficient control
TLDR
This thesis uses probabilistic Bayesian modelling to learn systems from scratch, similar to the PILCO algorithm, and takes a step towards data-efficient learning of high-dimensional control using Bayesian neural networks (BNNs).
Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control
TLDR
This work proposes a model-based RL framework built on probabilistic Model Predictive Control, using Gaussian Processes to incorporate model uncertainty into long-term predictions and thereby reduce the impact of model errors, and provides theoretical guarantees of first-order optimality for GP-based transition models with deterministic approximate inference for long-term planning.
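A minimal sketch of model predictive control on top of such a learned GP model, assuming the fitted `gp` from the sketch above: random-shooting optimization of a short action sequence with mean-only rollouts, whereas the paper propagates full predictive distributions and derives first-order optimality guarantees.

import numpy as np

def mpc_action(gp, x0, horizon=8, n_candidates=200, seed=1):
    rng = np.random.default_rng(seed)
    U = rng.uniform(-1, 1, size=(n_candidates, horizon))   # candidate plans
    x = np.full(n_candidates, x0)
    cost = np.zeros(n_candidates)
    for t in range(horizon):
        x = x + gp.predict(np.column_stack([x, U[:, t]]))  # GP mean rollout
        cost += x**2 + 0.01 * U[:, t]**2                   # quadratic stage cost
    return U[np.argmin(cost), 0]   # apply the best plan's first action, re-plan

# u0 = mpc_action(gp, x0=1.0)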
Gaussian Processes in Reinforcement Learning: Stability Analysis and Efficient Value Propagation
TLDR
Two current limitations of model-based RL that are indispensable prerequisites for widespread deployment of model-based RL in real-world tasks are addressed, and an approximation based on numerical quadrature that can handle complex state distributions is proposed.
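The kind of expectation such value propagation requires is E[V(x)] under a Gaussian state distribution. A self-contained 1-D illustration with Gauss-Hermite quadrature (the paper's quadrature scheme targets more complex state distributions; this only shows the mechanic):

import numpy as np

def gauss_hermite_expectation(V, mu, sigma, n=20):
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    # Substituting x = mu + sqrt(2)*sigma*z turns the Hermite weight exp(-z^2)
    # into the N(mu, sigma^2) density, up to the 1/sqrt(pi) factor.
    return weights @ V(mu + np.sqrt(2.0) * sigma * nodes) / np.sqrt(np.pi)

V = lambda x: np.cos(x)     # stand-in for a value function
print(gauss_hermite_expectation(V, mu=0.3, sigma=0.5))
print(np.cos(0.3) * np.exp(-0.5**2 / 2))   # closed form E[cos(x)] for comparison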
Toward Faster Reinforcement Learning for Robotics: Using Gaussian Processes
TLDR
This work aims to leverage the abilities of computational graphs to produce a ROS-friendly Python implementation of PILCO, and discusses a case study of a real-world robotic task.
Toward faster reinforcement learning for robotics applications by using Gaussian processes
  • A. Younes, A. S. Yushchenko
  • Computer Science
    XLIII ACADEMIC SPACE CONFERENCE: dedicated to the memory of academician S.P. Korolev and other outstanding Russian scientists – Pioneers of space exploration
  • 2019
TLDR
This work proposes using Gaussian processes to improve the efficiency of reinforcement learning: a GP learns a state-transition model during a robot (interaction) phase, and afterwards the GP is used to simulate trajectories and optimize the robot's controller in a (simulation) phase.
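A sketch of the (simulation) phase described above, assuming the fitted `gp` from earlier: the controller is evaluated purely on simulated GP rollouts and improved without further robot interaction. Random search over a scalar linear-policy gain is used here for brevity; gradient-based optimizers are the usual choice.

import numpy as np

def simulated_return(gp, theta, x0=1.0, horizon=20):
    x, total = x0, 0.0
    for _ in range(horizon):
        u = float(np.clip(-theta * x, -1, 1))   # linear policy u = -theta*x
        x = x + gp.predict([[x, u]])[0]         # GP mean rollout
        total -= x**2                           # reward: stay near the origin
    return total

def improve_policy(gp, theta=0.0, iters=50, step=0.3, seed=2):
    rng = np.random.default_rng(seed)
    best = simulated_return(gp, theta)
    for _ in range(iters):
        cand = theta + step * rng.standard_normal()
        r = simulated_return(gp, cand)
        if r > best:
            theta, best = cand, r
    return theta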
Efficient reinforcement learning for robots using informative simulated priors
  • M. Cutler, J. How
  • Computer Science
    2015 IEEE International Conference on Robotics and Automation (ICRA)
  • 2015
TLDR
A novel method for transferring data from a simulator to a robot, using simulated data as a prior for real-world learning; results show the benefits of incorporating the prior knowledge into the learning framework.
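One simple way to realize "simulated data as a prior", sketched under the assumption that prior knowledge enters through the GP's hyperparameters and training set; the paper's actual transfer scheme may differ. Kernel hyperparameters are learned on plentiful simulator transitions, then the model is conditioned on simulated plus real data with those hyperparameters frozen.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_with_sim_prior(X_sim, y_sim, X_real, y_real):
    gp_sim = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-3))
    gp_sim.fit(X_sim, y_sim)                 # hyperparameters from simulation
    gp = GaussianProcessRegressor(kernel=gp_sim.kernel_, optimizer=None)
    gp.fit(np.vstack([X_sim, X_real]), np.concatenate([y_sim, y_real]))
    return gp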
Goal-driven dynamics learning via Bayesian optimization
TLDR
This work uses Bayesian optimization in an active learning framework where a locally linear dynamics model is learned with the intent of maximizing control performance, and is used in conjunction with optimal control schemes to efficiently design a controller for a given task.
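A sketch of the Bayesian-optimization loop over a tunable parameter scored by closed-loop performance, using expected improvement as the acquisition function. The black-box `control_cost` is a hypothetical stand-in for "fit the locally linear model, design the controller, run it on the system".

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def control_cost(theta):         # hypothetical stand-in for a real experiment
    return (theta - 0.7)**2

grid = np.linspace(-2, 2, 400)[:, None]          # candidate parameters
X, y = [0.0, -1.0], [control_cost(0.0), control_cost(-1.0)]

for _ in range(15):
    gp = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(1e-4), normalize_y=True)
    gp.fit(np.array(X)[:, None], y)
    mu, sd = gp.predict(grid, return_std=True)
    z = (min(y) - mu) / np.maximum(sd, 1e-9)
    ei = (min(y) - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    theta = float(grid[np.argmax(ei), 0])
    X.append(theta)
    y.append(control_cost(theta))

print("best parameter found:", X[int(np.argmin(y))])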
Enhanced Probabilistic Inference Algorithm Using Probabilistic Neural Networks for Learning Control
TLDR
A probabilistic neural network (PNN) is proposed to replace the GP for building probabilistic dynamics models and developing a deterministic control policy from long-term predictions, which can reconcile data efficiency and speed of learning even in high-dimensional observation spaces.
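A minimal PyTorch sketch of one common reading of such a network: a model that outputs a predictive mean and variance per dimension and is trained with the Gaussian negative log-likelihood, so it can stand in for a GP dynamics model. Architecture and sizes are illustrative.

import torch
import torch.nn as nn

class PNN(nn.Module):
    def __init__(self, d_in, d_out, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, hidden), nn.Tanh())
        self.mean = nn.Linear(hidden, d_out)
        self.logvar = nn.Linear(hidden, d_out)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.logvar(h)

def gaussian_nll(mean, logvar, target):
    return 0.5 * (logvar + (target - mean)**2 / logvar.exp()).mean()

# model = PNN(d_in=2, d_out=1)
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = gaussian_nll(*model(xu_batch), dx_batch); loss.backward(); opt.step()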
Reinforcement Learning for Robotics and Control with Active Uncertainty Reduction
TLDR
This work introduces virtual environments based on active uncertainty reduction, formed through limited trials conducted in the original environment, and provides an efficient method for uncertainty management that serves as a metric for self-improvement by identifying the points of maximum expected improvement through adaptive sampling.
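A reduced sketch of uncertainty-driven adaptive sampling, assuming the fitted `gp` from earlier: the next trial is placed where predictive uncertainty is largest. The paper's criterion is expected improvement; maximum predictive standard deviation is substituted here for brevity.

import numpy as np

def next_query(gp, candidates):
    _, std = gp.predict(candidates, return_std=True)   # predictive uncertainty
    return candidates[np.argmax(std)]                  # most informative point

# State-action grid for the 1-D model sketched earlier:
# grid = np.stack(np.meshgrid(np.linspace(-2, 2, 30),
#                             np.linspace(-1, 1, 30)), -1).reshape(-1, 2)
# x_next, u_next = next_query(gp, grid)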
...

References

SHOWING 1-10 OF 73 REFERENCES
Learning to Control a Low-Cost Manipulator using Data-Efficient Reinforcement Learning
TLDR
It is demonstrated how a low-cost, off-the-shelf robotic system can learn closed-loop policies for a stacking task in only a handful of trials, from scratch.
Gaussian process dynamic programming
Probabilistic model-based imitation learning
TLDR
This work proposes to learn a probabilistic model of the system, which is exploited for mental rehearsal of the current controller by making predictions about future trajectories, and learns a robot-specific controller that directly matches robot trajectories with observed ones.
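A sketch of scoring a controller by how well its "mental rehearsal" matches a demonstration, assuming the fitted `gp` from earlier and using only one-step predictive uncertainty (full trajectory-distribution propagation, as in the paper, is more involved):

import numpy as np

def trajectory_nll(gp, policy, demo):           # demo: 1-D array of states
    x, nll = demo[0], 0.0
    for x_next in demo[1:]:
        mean, std = gp.predict([[x, policy(x)]], return_std=True)
        mu, s = x + mean[0], max(std[0], 1e-9)
        nll += 0.5 * (np.log(2 * np.pi * s**2) + (x_next - mu)**2 / s**2)
        x = x_next                              # follow the demonstration
    return nll   # lower = predictions match the demonstrated trajectory better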
Autonomous helicopter control using reinforcement learning policy search methods
  • J. Bagnell, J. Schneider
  • Computer Science
    Proceedings 2001 ICRA. IEEE International Conference on Robotics and Automation (Cat. No.01CH37164)
  • 2001
TLDR
This work considers algorithms that evaluate and synthesize controllers under distributions of Markovian models, demonstrates the presented learning control algorithm by flying an autonomous helicopter, and shows that the learned controller is robust and delivers good performance in this real-world domain.
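A sketch of evaluating a controller under a distribution of models rather than a single point estimate: returns are averaged over models sampled from a posterior, so the selected policy is robust to model uncertainty. The linear model family and posterior below are illustrative assumptions.

import numpy as np

def robust_score(policy, model_samples, x0=1.0, horizon=30):
    scores = []
    for a, b in model_samples:          # each sampled model: x' = a*x + b*u
        x, total = x0, 0.0
        for _ in range(horizon):
            u = policy(x)
            x = a * x + b * u
            total -= x**2 + 0.1 * u**2
        scores.append(total)
    return np.mean(scores)              # or min(scores) for a worst-case view

rng = np.random.default_rng(3)
models = rng.normal([0.9, 0.5], [0.05, 0.1], size=(20, 2))  # posterior samples
print(robust_score(lambda x: -0.8 * x, models))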
Gaussian Processes and Reinforcement Learning for Identification and Control of an Autonomous Blimp
TLDR
This paper shows how the GP-enhanced model can be used in conjunction with reinforcement learning to generate a blimp controller that is superior to those learned with ODE or GP models alone.
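One standard way to build such a GP-enhanced model, sketched under the assumption that the enhancement is a residual correction: keep the first-principles ODE prediction and let a GP learn only its error from logged data. Function and variable names are placeholders.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_hybrid(ode_step, X, x_next):
    residual = x_next - ode_step(X)             # what the physics model misses
    gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-3), normalize_y=True)
    gp.fit(X, residual)
    return lambda X_new: ode_step(X_new) + gp.predict(X_new)

# hybrid = fit_hybrid(ode_step, X_train, x_next_train)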
Learning by Demonstration
  • S. Schaal
  • Education, Computer Science
    Encyclopedia of Machine Learning and Data Mining
  • 1996
TLDR
In an implementation of pole balancing on a complex anthropomorphic robot arm, it is demonstrated that, when facing the complexities of real signal processing, model-based reinforcement learning offers the most robustness for LQR problems.
A Survey on Policy Search for Robotics
TLDR
This work classifies model-free methods based on their policy evaluation strategy, policy update strategy, and exploration strategy, and presents a unified view of existing algorithms.
Policy Gradient Methods for Robotics
  • Jan Peters, S. Schaal
  • Computer Science
    2006 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • 2006
TLDR
An overview of learning with policy gradient methods for robotics is given, with a strong focus on recent advances in the field, and it is shown how the most recently developed methods can significantly improve learning performance.
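The basic likelihood-ratio (REINFORCE) gradient these methods build on, as a self-contained toy: a Gaussian policy u ~ N(theta*x, sigma^2) is improved with the estimator grad J = E[reward * grad log pi(u|x)]. The one-step problem and learning rate are illustrative.

import numpy as np

rng = np.random.default_rng(4)
theta, sigma = 0.0, 0.5                  # policy: u ~ N(theta * x, sigma^2)

for _ in range(2000):
    x = rng.uniform(-1, 1)
    u = theta * x + sigma * rng.standard_normal()
    reward = -(u - 2.0 * x)**2           # best response: u = 2x, so theta -> 2
    grad_logp = (u - theta * x) * x / sigma**2
    theta += 0.01 * reward * grad_logp   # stochastic policy-gradient ascent

print(f"learned gain: {theta:.2f} (optimal: 2.0)")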
Efficient Non-Linear Control by Combining Q-learning with Local Linear Controllers
TLDR
A hierarchical RL algorithm composed of local linear controllers and Q-learning, both of which are very simple, that solves a non-linear control problem with continuous state and action spaces in less time than conventional discrete RL methods.
Model-based imitation learning by probabilistic trajectory matching
TLDR
This paper proposes learning probabilistic forward models to compute a probability distribution over trajectories, compares the approach to model-based reinforcement learning methods with hand-crafted cost functions, and evaluates the method in experiments on a real compliant robot.
...