Keep It Simple: Data-Efficient Learning for Controlling Complex Systems With Simple Models
@article{Power2021KeepIS,
  title={Keep It Simple: Data-Efficient Learning for Controlling Complex Systems With Simple Models},
  author={Thomas Power and Dmitry Berenson},
  journal={IEEE Robotics and Automation Letters},
  year={2021},
  volume={6},
  pages={1184-1191}
}
When manipulating a novel object with complex dynamics, such as a deformable object, a state representation is not always available. Learning both a representation and dynamics from observations requires large amounts of data. We propose Learned Visual Similarity Predictive Control (LVSPC), a novel method for data-efficient learning to control systems with complex dynamics and high-dimensional state spaces from images. LVSPC leverages a given simple model approximation from which image…
7 Citations
Goal-Conditioned Model Simplification for Deformable Object Manipulation
- Computer Science
- 2022
This work explores the idea of goal-conditioned model simplification, which has great potential to improve motion planning, perception, and policy learning, and proposes two workflows for objects that can be approximated by lines and surfaces.
Planning with Learned Model Preconditions for Water Manipulation
- Computer Science
- 2022
This work addresses the problem of modeling deformable object dynamics by learning where a set of given high-level dynamics models is accurate: a model precondition, which is then used to model trajectories using states and closed-loop actions for which the dynamics models are accurate.
Variational Inference MPC using Normalizing Flows and Out-of-Distribution Projection
- Computer Science
- Robotics: Science and Systems XVIII
- 2022
This work presents a Model Predictive Control method for collision-free navigation that uses amortized variational inference to approximate the distribution of optimal control sequences, training a normalizing flow conditioned on the start, goal, and environment. It also presents an approach that performs projection on the representation of the environment as part of the MPC process.
Contact-Rich Manipulation of a Flexible Object based on Deep Predictive Learning using Vision and Tactility
- Computer Science
- 2022 International Conference on Robotics and Automation (ICRA)
- 2022
A model that performs contact-rich flexible-object manipulation through real-time prediction of vision together with tactile sensing was developed; it introduces a point-based attention mechanism for extracting image features, a softmax transformation for predicting motions, and a convolutional neural network for extracting tactile features.
Challenges and Outlook in Robotic Manipulation of Deformable Objects
- Engineering
- IEEE Robotics & Automation Magazine
- 2022
This article reviews recent advances in deformable object manipulation and highlights the main challenges when considering deformation in each sub-field, and proposes future directions of research.
Variational Inference MPC for Robot Motion with Normalizing Flows
- Computer Science
- 2021
This paper proposes using amortized variational inference to approximate the posterior with a normalizing flow conditioned on the start, goal, and environment, and demonstrates that this approach generalizes to a difficult novel environment and outperforms a baseline sampling-based MPC method on a navigation problem.
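As a rough illustration of the sampling and log-probability interface such a learned posterior over control sequences exposes, here is a toy single-layer conditional affine "flow" (the class and weight names are hypothetical; real normalizing flows stack many invertible layers):

```python
import numpy as np

class ConditionalAffineFlow:
    """Toy one-layer conditional 'flow': u = mu(c) + exp(s(c)) * eps with
    eps ~ N(0, I). Real normalizing flows stack many invertible layers;
    this only illustrates the sample/log_prob interface of the posterior."""
    def __init__(self, w_mu, w_log_sigma):
        self.w_mu, self.w_ls = w_mu, w_log_sigma   # linear conditioners

    def sample(self, context, n, rng):
        mu = context @ self.w_mu
        sigma = np.exp(context @ self.w_ls)
        return mu + sigma * rng.normal(size=(n, mu.size))

    def log_prob(self, u, context):
        mu, log_sigma = context @ self.w_mu, context @ self.w_ls
        z = (u - mu) / np.exp(log_sigma)           # invert the transform
        base = -0.5 * (z ** 2 + np.log(2 * np.pi)).sum(axis=1)
        return base - log_sigma.sum()              # change-of-variables term
```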
Learning Model Preconditions for Planning with Multiple Models
- Computer Science
- CoRL
- 2021
This work learns model deviation estimators (MDEs) to predict the error between real-world states and the states output by skill effect models, and uses the MDE predictions to switch between models in order to speed up planning when possible.
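The model-switching idea can be sketched as follows (a hypothetical `select_model` helper; in the paper the MDEs are learned regressors, stubbed here as plain callables):

```python
def select_model(state, action, mdes, models, max_dev=0.1):
    """Hypothetical model switcher: query each model deviation estimator
    and plan with the model predicted to be most accurate; trust no model
    when every predicted deviation exceeds max_dev."""
    devs = [mde(state, action) for mde in mdes]
    best = min(range(len(models)), key=devs.__getitem__)
    return models[best] if devs[best] <= max_dev else None
```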
References
SHOWING 1-10 OF 34 REFERENCES
Information theoretic MPC for model-based reinforcement learning
- Computer Science
- 2017 IEEE International Conference on Robotics and Automation (ICRA)
- 2017
An information theoretic model predictive control algorithm capable of handling complex cost criteria and general nonlinear dynamics and using multi-layer neural networks as dynamics models to solve model-based reinforcement learning tasks is introduced.
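The core of the information-theoretic update is a softmax-weighted average of sampled control perturbations. A minimal sketch (function name and array shapes are illustrative, not the authors' code):

```python
import numpy as np

def mppi_update(nominal_u, noise, costs, lam=1.0):
    """Information-theoretic MPC update: reweight sampled control
    perturbations by the exponentiated negative trajectory cost."""
    costs = np.asarray(costs, dtype=float)
    beta = costs.min()                       # subtract min cost for stability
    w = np.exp(-(costs - beta) / lam)        # importance weights, shape (K,)
    w /= w.sum()
    # Weighted average of perturbations shifts the nominal control sequence.
    return nominal_u + (w[:, None, None] * noise).sum(axis=0)

# Toy usage: K=4 rollouts, horizon T=3, 1-D controls.
rng = np.random.default_rng(0)
nominal = np.zeros((3, 1))
eps = rng.normal(size=(4, 3, 1))
u_new = mppi_update(nominal, eps, costs=[1.0, 0.5, 2.0, 0.1], lam=0.5)
```

Lower-cost rollouts contribute more to the update; with uniform costs the rule degenerates to the plain mean of the perturbations.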
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
- Computer Science
- ICML
- 2020
CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features and is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features.
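The contrastive objective is an InfoNCE-style loss over matched anchor/positive embedding pairs. A simplified NumPy sketch (CURL itself uses a learned bilinear similarity and a momentum-updated key encoder; this version uses cosine similarity):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: row i of `positives` is the positive for
    anchor i; every other row in the batch serves as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature              # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # correct pairs on diagonal
```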
Learning Latent Dynamics for Planning from Pixels
- Computer Science
- ICML
- 2019
The Deep Planning Network (PlaNet) is proposed, a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space using a latent dynamics model with both deterministic and stochastic transition components.
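The fast online planning step is typically the cross-entropy method over action sequences rolled out in the learned model. A toy sketch (1-D actions; the hypothetical `dynamics` and `reward` callables stand in for the learned latent model):

```python
import numpy as np

def cem_plan(dynamics, reward, z0, horizon=5, pop=64, n_elite=8, iters=3, seed=0):
    """Cross-entropy-method planner over 1-D action sequences: sample,
    evaluate imagined returns under the (latent) model, refit a Gaussian
    to the elite samples, repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(horizon), np.ones(horizon)
    for _ in range(iters):
        actions = rng.normal(mu, sigma, size=(pop, horizon))
        returns = np.empty(pop)
        for i, seq in enumerate(actions):
            z, total = z0, 0.0
            for a in seq:                 # roll out the model, no real env
                z = dynamics(z, a)
                total += reward(z)
            returns[i] = total
        elite = actions[np.argsort(returns)[-n_elite:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu                             # execute mu[0], then replan

# Toy latent model: integrator dynamics, reward for staying near zero.
plan = cem_plan(lambda z, a: z + a, lambda z: -z ** 2, z0=1.0)
```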
The unscented Kalman filter for nonlinear estimation
- Mathematics
- Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No.00EX373)
- 2000
This paper points out the flaws in using the extended Kalman filter (EKF) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlmann (1997). A central and vital…
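At the heart of the UKF is the unscented transform, which propagates a small deterministic set of sigma points through the nonlinearity instead of linearizing it. A minimal sketch of sigma-point generation (standard scaled parameterization; not tied to any particular implementation):

```python
import numpy as np

def sigma_points(mean, cov, alpha=0.1, kappa=0.0):
    """Generate the 2n+1 sigma points and mean weights of the unscented
    transform (scaled parameterization)."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)    # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wm[0] = lam / (n + lam)
    return pts, wm
```

By construction, the weighted mean of the sigma points recovers the input mean exactly.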
Parametric Gaussian Process Regressors
- Computer Science
- ICML
- 2020
In an extensive empirical comparison with a number of alternative methods for scalable GP regression, the resulting predictive distributions are found to exhibit significantly better calibrated uncertainties and higher log likelihoods, often by as much as half a nat per datapoint.
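For context, exact GP regression (the non-scalable baseline that parametric regressors approximate) fits in a few lines; the kernel hyperparameters here are illustrative defaults:

```python
import numpy as np

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, sn=0.1):
    """Exact GP regression posterior mean/variance with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)
    K = k(X, X) + sn ** 2 * np.eye(len(X))   # noisy training covariance
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = sf ** 2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

The O(n^3) solve against the full training covariance is exactly the cost that scalable approximations avoid.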
Learning When to Trust a Dynamics Model for Planning in Reduced State Spaces
- Computer Science
- IEEE Robotics and Automation Letters
- 2020
This letter presents a formulation for planning in reduced state spaces that uses a classifier to bias the planner away from state-action pairs that are not reliably feasible under the true dynamics, as well as an application of the framework to rope manipulation, where the VEB is used.
Dream to Control: Learning Behaviors by Latent Imagination
- Computer Science
- ICLR
- 2020
Dreamer is presented, a reinforcement learning agent that solves long-horizon tasks purely by latent imagination and efficiently learns behaviors by backpropagating analytic gradients of learned state values through trajectories imagined in the compact state space of a learned world model.
Self-Supervised Learning of State Estimation for Manipulating Deformable Linear Objects
- Computer Science
- IEEE Robotics and Automation Letters
- 2020
This work is the first to demonstrate self-supervised training of rope state estimation on real images, without requiring expensive annotations, and trains a fast and differentiable neural network dynamics model that encodes the physics of mass-spring systems.
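A mass-spring dynamics step of the kind such a model encodes can be sketched as follows (illustrative semi-implicit Euler integration of a chain of unit point masses; not the paper's network):

```python
import numpy as np

def mass_spring_step(x, v, k=10.0, c=0.5, dt=0.01, rest=1.0):
    """One semi-implicit Euler step for a chain of unit point masses
    connected by linear springs of natural length `rest`."""
    d = np.diff(x, axis=0)                        # segment vectors
    L = np.linalg.norm(d, axis=1, keepdims=True)  # segment lengths
    f = k * (L - rest) * d / L                    # Hooke's law along segments
    forces = np.zeros_like(x)
    forces[:-1] += f                              # pull left endpoints right
    forces[1:] -= f                               # pull right endpoints left
    v = v + dt * (forces - c * v)                 # damped velocity update
    return x + dt * v, v

# A two-mass chain stretched to twice its rest length contracts.
x0 = np.array([[0.0, 0.0], [2.0, 0.0]])
x1, v1 = mass_spring_step(x0, np.zeros_like(x0))
```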
PyTorch: An Imperative Style, High-Performance Deep Learning Library
- Computer Science
- NeurIPS
- 2019
This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.