MPC with Sensor-Based Online Cost Adaptation

@article{Meduri2022MPCWS,
  title   = {MPC with Sensor-Based Online Cost Adaptation},
  author  = {Avadesh Meduri and Huaijiang Zhu and Armand Jordana and Ludovic Righetti},
  journal = {arXiv preprint arXiv:2209.09451},
  year    = {2022}
}
Model predictive control is a powerful tool to generate complex motions for robots. However, it often requires solving non-convex problems online to produce rich behaviors, which is computationally expensive and not always practical in real time. Additionally, direct integration of high-dimensional sensor data (e.g. RGB-D images) in the feedback loop is challenging with current state-space methods. This paper aims to address both issues. It introduces a model predictive control scheme, where a…
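To make the theme concrete, here is a minimal sketch of an MPC loop whose cost parameters are re-estimated online from sensor readings. Everything below (the 1-D double-integrator model, the noisy "sensor", the filter constants) is illustrative and not the paper's actual formulation; with a quadratic cost and linear dynamics, each MPC solve reduces to a regularized least-squares problem.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete double-integrator dynamics
B = np.array([0.0, 0.1])
H = 10                                    # prediction horizon

def solve_mpc(x0, target, lam=1e-3):
    """Minimize sum_t (pos_t - target)^2 + lam * u_t^2 over the horizon.

    Predicted positions are linear in the controls, pos = M u + m, so the
    optimal open-loop sequence is a regularized least-squares solve."""
    m, x = np.zeros(H), x0.copy()
    for t in range(H):                    # free response (u = 0)
        x = A @ x
        m[t] = x[0]
    M = np.zeros((H, H))
    for s in range(H):                    # influence of u_s on later positions
        v = B.copy()
        for t in range(s, H):
            M[t, s] = v[0]
            v = A @ v
    u = np.linalg.solve(M.T @ M + lam * np.eye(H), M.T @ (target - m))
    return u[0]                           # receding horizon: apply first input

rng = np.random.default_rng(0)
x = np.array([0.0, 0.0])
true_target, target_est = 1.0, 0.0
for step in range(30):
    sensed = true_target + 0.05 * rng.standard_normal()   # noisy sensor reading
    target_est = 0.8 * target_est + 0.2 * sensed          # online cost update
    x = A @ x + B * solve_mpc(x, target_est)

print(f"final position ~ {x[0]:.2f} (target {true_target})")
```

The point of the sketch is the structure of the loop: the cost parameter (here, the tracking target) is updated from sensor data at every control step, while the MPC solve itself stays unchanged.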


References

Showing 1-10 of 30 references

Fast Joint Space Model-Predictive Control for Reactive Manipulation

A joint-space sampling-based MPC for manipulators that can be efficiently parallelized on GPUs; it handles task- and joint-space constraints while taking less than 0.02 s (50 Hz) to compute the next control command.
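The sampling-based receding-horizon idea above can be sketched in a few lines of batched numpy; the batch dimension over sampled control sequences is exactly what maps onto a GPU. This is an MPPI-style softmin update on a 1-D double integrator, with all constants chosen for illustration rather than taken from the reference.

```python
import numpy as np

rng = np.random.default_rng(1)
H, K = 15, 256            # horizon length, number of sampled control sequences
dt, target = 0.1, 1.0

def rollout(u_batch, x0):
    """Simulate all K control sequences at once; return one cost per sequence."""
    n = u_batch.shape[0]
    pos = np.full(n, x0[0])
    vel = np.full(n, x0[1])
    cost = np.zeros(n)
    for t in range(H):
        vel = vel + dt * u_batch[:, t]
        pos = pos + dt * vel
        cost += (pos - target) ** 2 + 1e-3 * u_batch[:, t] ** 2
    return cost

x = np.array([0.0, 0.0])
u_mean = np.zeros(H)
for step in range(40):
    samples = u_mean + rng.standard_normal((K, H))    # perturb the nominal plan
    costs = rollout(samples, x)
    w = np.exp(-(costs - costs.min()) / 0.3)          # softmin weights
    u_mean = (w[:, None] * samples).sum(axis=0) / w.sum()
    vel_next = x[1] + dt * u_mean[0]                  # apply only the first input
    x = np.array([x[0] + dt * vel_next, vel_next])
    u_mean = np.roll(u_mean, -1)                      # warm-start the next solve
    u_mean[-1] = 0.0
```

Because every sample is rolled out independently, the inner loop is embarrassingly parallel, which is why this class of controller reaches the 50 Hz rates mentioned above on a GPU.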

Safe and Fast Tracking on a Robot Manipulator: Robust MPC and Neural Network Control

This work proposes a new robust setpoint-tracking MPC algorithm that achieves reliable and safe tracking of a dynamic setpoint while guaranteeing stability and constraint satisfaction; it is the first to show that both the proposed robust and approximate MPC schemes scale to real-world robotic systems.

High-Frequency Nonlinear Model Predictive Control of a Manipulator

This paper presents the first hardware implementation of closed-loop nonlinear MPC on a 7-DoF torque-controlled robot, leveraging a state-of-the-art optimal control solver, Differential Dynamic Programming (DDP), to replan state and control trajectories at real-time rates (1 kHz).

An integrated system for real-time model predictive control of humanoid robots

An integrated system based on real-time model predictive control (MPC) applied to the full dynamics of the robot, made possible by the speed of the new physics engine (MuJoCo), the efficiency of the trajectory optimization algorithm, and the contact-smoothing methods developed for control optimization.

Whole-body model-predictive control applied to the HRP-2 humanoid

This paper implements a complete model predictive controller and runs it in real time on the physical HRP-2 robot; this is the first time such a whole-body model predictive controller has been applied in real time to a complex dynamic robot.

Differentiable MPC for End-to-end Planning and Control

Presents the foundations for using model predictive control as a differentiable policy class for reinforcement learning in continuous state and action spaces, and shows that the MPC policies are significantly more data-efficient than a generic neural network and superior to traditional system identification in a setting where the expert is unrealizable.

Learning Convex Optimization Control Policies

This paper proposes a method to automate the tuning of convex optimization control policies by adjusting their parameters using an approximate gradient of the performance metric with respect to those parameters.
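A minimal sketch of that tuning idea, under simplifying assumptions: the "policy" here is a trivial linear feedback u = -theta * x standing in for a convex optimization policy, and the approximate gradient of the closed-loop performance metric is taken by finite differences on simulated rollouts.

```python
import numpy as np

def rollout_cost(theta, x0=1.0, steps=20):
    """Closed-loop cost of running u = -theta * x on x_{t+1} = x_t + 0.1 u_t."""
    x, c = x0, 0.0
    for _ in range(steps):
        u = -theta * x
        c += x * x + 0.1 * u * u       # quadratic performance metric
        x = x + 0.1 * u
    return c

theta, lr, eps = 0.5, 0.05, 1e-4
for _ in range(100):
    # central finite difference approximates d(cost)/d(theta)
    g = (rollout_cost(theta + eps) - rollout_cost(theta - eps)) / (2 * eps)
    theta -= lr * g                    # descend on the policy parameter

print(f"tuned gain theta ~ {theta:.2f}")
```

The same loop applies when the policy is a full convex program: only the rollout (and how the gradient is obtained) changes, while the outer parameter-update step stays identical.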

Agile Maneuvers in Legged Robots: a Predictive Control Approach

This work proposes a hybrid predictive controller that considers the robot's actuation limits and full-body dynamics, and combines the feedback policies with tactile information to locally predict future actions; it is the first to handle actuation limits, generate agile locomotion maneuvers, and execute optimal feedback policies for low-level torque control without a separate whole-body controller.

End-to-End Training of Deep Visuomotor Policies

This paper develops a method for learning policies that map raw image observations directly to torques at the robot's motors, trained with a partially observed guided policy search method and supervised by a simple trajectory-centric reinforcement learning method.

Agile Autonomous Driving using End-to-End Deep Imitation Learning

This work presents an end-to-end imitation learning system for agile, off-road autonomous driving using only low-cost sensors, and shows that policies trained with online imitation learning overcome well-known covariate-shift challenges and generalize better than policies trained with batch imitation learning.