OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation

@inproceedings{Wong2022OSCARDO,
  title={OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation},
  author={J. Wong and Viktor Makoviychuk and Anima Anandkumar and Yuke Zhu},
  booktitle={ICRA},
  year={2022}
}
Learning performant robot manipulation policies can be challenging due to high-dimensional continuous actions and complex physics-based dynamics. This can be alleviated through an intelligent choice of action space. Operational Space Control (OSC) has been used as an effective task-space controller for manipulation. Nonetheless, its effectiveness depends on the fidelity of the underlying model, and it is prone to failure when modeling errors are present. In this work, we propose OSC for Adaptation and Robustness…
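For context, OSC in the classical (Khatib-style) formulation maps a desired task-space acceleration to joint torques through the task-space inertia matrix. A minimal numpy sketch of that control law follows; it is illustrative only, not the paper's data-driven implementation, and the function name and gain values are assumptions:

```python
import numpy as np

def osc_torques(M, J, g, x_err, xd_err, kp=150.0, kd=25.0):
    """Classical operational-space control law (sketch).
    M: (n, n) joint-space mass matrix
    J: (m, n) end-effector Jacobian
    g: (n,) gravity torques
    x_err, xd_err: (m,) task-space pose and velocity errors
    Returns (n,) commanded joint torques."""
    M_inv = np.linalg.inv(M)
    # Task-space inertia: Lambda = (J M^-1 J^T)^-1 (pseudo-inverse for robustness)
    Lambda = np.linalg.pinv(J @ M_inv @ J.T)
    a_des = kp * x_err - kd * xd_err   # PD acceleration command in task space
    F = Lambda @ a_des                 # task-space force/wrench
    return J.T @ F + g                 # joint torques plus gravity compensation
```

The paper's point is that `M` and `J` come from a dynamics model, so modeling errors propagate directly into the commanded torques; OSCAR instead learns these dynamics from data.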


Factory: Fast Contact for Robotic Assembly
TLDR
This work presents Factory, a set of physics simulation methods and robot learning tools for robotic assembly, and achieves real-time or faster simulation of a wide range of contact-rich scenes, including simultaneous simulation of 1000 nut-and-bolt interactions.
Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning
TLDR
This research presents Isaac Gym, a high-performance learning platform that trains policies for a wide variety of robotics tasks entirely on the GPU, yielding training times for complex robotics tasks on a single GPU that are 2-3 orders of magnitude faster than conventional RL training.

References

SHOWING 1-10 OF 52 REFERENCES
Learning to Control in Operational Space
TLDR
The proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm and demonstrates feasibility for complex high degree-of-freedom robots.
Learning force control policies for compliant manipulation
TLDR
This work presents an approach to acquiring manipulation skills on compliant robots through reinforcement learning, and uses the Policy Improvement with Path Integrals (PI2) algorithm to learn these force/torque profiles by optimizing a cost function that measures task success.
Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks
TLDR
It is shown that VICES improves sample efficiency, maintains low energy consumption, and ensures safety across all three experimental setups, and RL policies learned with VICES can transfer across different robot models in simulation, and from simulation to real for the same robot.
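The VICES-style action space described above has the policy output both a pose displacement and per-dimension impedance gains each step. A hedged sketch of that action-to-torque mapping follows (illustrative only; variable names, the critical-damping choice, and the 6-DoF layout are assumptions, not the paper's code):

```python
import numpy as np

def impedance_action_to_torques(action, J, x, x_dot, g):
    """Map a variable-impedance action to joint torques (sketch).
    action: (12,) = [pose displacement (6), stiffness kp (6)]
    J: (6, n) end-effector Jacobian; x, x_dot: (6,) pose and velocity
    g: (n,) gravity torques."""
    delta_x, kp = action[:6], action[6:]
    kd = 2.0 * np.sqrt(kp)              # critical damping (a common choice)
    x_des = x + delta_x                  # policy commands a relative target
    F = kp * (x_des - x) - kd * x_dot    # task-space impedance wrench
    return J.T @ F + g                   # joint torques plus gravity compensation
```

Letting RL choose `kp` per step is what makes the controller "variable impedance": the policy can stiffen for precision and soften for compliant contact.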
Reinforcement Learning on Variable Impedance Controller for High-Precision Robotic Assembly
TLDR
This paper explicitly considers incorporating operational-space force/torque information into reinforcement learning; this is motivated by how humans heuristically map perceived forces to control actions, and it enables high-precision tasks to be completed with relative ease.
Fundamental Challenges in Deep Learning for Stiff Contact Dynamics
  • Mihir Parmar, Mathew Halm, Michael Posa
  • 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
TLDR
Presents empirical evidence that learning an accurate model in the first place can be confounded by contact, as modern deep learning approaches are not designed to capture the non-smoothness of contact dynamics.
Sim-to-Real Transfer for Biped Locomotion
TLDR
This work presents a new approach for transferring dynamic robot control policies, such as biped locomotion, from simulation to real hardware, and uses Bayesian optimization to determine the values of η that optimize the performance of PUP on the real hardware.
Deep Lagrangian Networks for end-to-end learning of energy-based control for under-actuated systems
TLDR
The resulting DeLaN for energy control (DeLaN 4EC) is the first model learning approach using generic function approximation that is capable of learning energy control because existing approaches cannot learn the system energies directly.
Learning Robotic Manipulation through Visual Planning and Acting
TLDR
This work learns to imagine goal-directed object manipulation directly from raw image data of self-supervised interaction of the robot with the object, and shows that separating the problem into visual planning and visual tracking control is more efficient and more interpretable than alternative data-driven approaches.
Benchmarking Reinforcement Learning Algorithms on Real-World Robots
TLDR
This work introduces several reinforcement learning tasks with multiple commercially available robots that present varying levels of learning difficulty, setup, and repeatability and test the learning performance of off-the-shelf implementations of four reinforcement learning algorithms and analyzes sensitivity to their hyper-parameters to determine their readiness for applications in various real-world tasks.
Reinforcement learning by reward-weighted regression for operational space control
TLDR
This work uses a generalization of the EM-based reinforcement learning framework suggested by Dayan & Hinton to reduce the problem of learning with immediate rewards to a reward-weighted regression problem with an adaptive, integrated reward transformation for faster convergence.
...