OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation

@article{Wong2022OSCAR,
  title={OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation},
  author={J. Wong and Viktor Makoviychuk and Anima Anandkumar and Yuke Zhu},
  journal={2022 International Conference on Robotics and Automation (ICRA)},
  year={2022}
}
Learning performant robot manipulation policies can be challenging due to high-dimensional continuous actions and complex physics-based dynamics. This can be alleviated through an intelligent choice of action space. Operational Space Control (OSC) has been used as an effective task-space controller for manipulation. Nonetheless, its strength depends on the fidelity of the underlying model, and it is prone to failure when modeling errors are present. In this work, we propose OSC for Adaptation and…
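For context, the classical OSC torque computation the abstract refers to can be sketched in a few lines of numpy. This is the textbook (Khatib-style) formulation, not OSCAR's learned variant; the function name, gains, and error conventions here are illustrative assumptions.

```python
import numpy as np

def osc_torques(J, M, x_err, xd_err, kp=150.0, kd=None):
    """Classical operational space control torque computation.

    J      : (m, n) end-effector Jacobian
    M      : (n, n) joint-space mass matrix
    x_err  : (m,)   task-space pose error (desired - current)
    xd_err : (m,)   task-space velocity error
    Returns (n,) joint torques.
    """
    if kd is None:
        kd = 2.0 * np.sqrt(kp)  # critically damped by default
    M_inv = np.linalg.inv(M)
    # Task-space inertia Lambda = (J M^-1 J^T)^-1; pinv guards
    # against kinematic singularities.
    lam = np.linalg.pinv(J @ M_inv @ J.T)
    # PD law in task space, scaled by Lambda, mapped back to joint torques.
    wrench = lam @ (kp * x_err + kd * xd_err)
    return J.T @ wrench
```

Because the mass matrix M and Jacobian J come from the robot model, any error in them propagates directly into the torques, which is the modeling-fidelity weakness the paper targets.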


Factory: Fast Contact for Robotic Assembly

This work presents Factory, a set of physics simulation methods and robot learning tools for robotic assembly, and achieves real-time or faster simulation of a wide range of contact-rich scenes, including simultaneous simulation of 1000 nut-and-bolt interactions.

Isaac Gym: High Performance GPU-Based Physics Simulation For Robot Learning

This research presents Isaac Gym, a high-performance learning platform that trains policies for a wide variety of robotics tasks entirely on the GPU, yielding training times for complex robotics tasks on a single GPU that are 2-3 orders of magnitude faster than conventional RL training.



Learning to Control in Operational Space

The proposed method works in the setting of learning resolved motion rate control on a real, physical Mitsubishi PA-10 medical robotics arm and demonstrates feasibility for complex high degree-of-freedom robots.

Operational Space Control: A Theoretical and Empirical Comparison

Extensive empirical results demonstrate that one of the simplified acceleration-based approaches can be advantageous in terms of task performance, ease of parameter tuning, and general robustness and compliance in the face of inevitable modeling errors.

Learning force control policies for compliant manipulation

This work presents an approach to acquiring manipulation skills on compliant robots through reinforcement learning, and uses the Policy Improvement with Path Integrals (PI2) algorithm to learn these force/torque profiles by optimizing a cost function that measures task success.

Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks

It is shown that VICES improves sample efficiency, maintains low energy consumption, and ensures safety across all three experimental setups; moreover, RL policies learned with VICES transfer across different robot models in simulation, and from simulation to the real robot.
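The core idea of a variable-impedance action space is that the policy outputs impedance parameters alongside the motion command, rather than torques directly. A minimal sketch of the action-to-gains mapping, with illustrative (not the paper's) stiffness bounds:

```python
import numpy as np

def vices_gains(action, kp_min=10.0, kp_max=300.0):
    """Map a policy action in [-1, 1]^m to per-dimension task-space
    stiffness kp and a critically damped damping term kd.

    The bounds kp_min/kp_max are hypothetical; a real controller would
    tune them per robot and task.
    """
    action = np.clip(action, -1.0, 1.0)
    kp = kp_min + 0.5 * (action + 1.0) * (kp_max - kp_min)
    kd = 2.0 * np.sqrt(kp)  # critical damping
    return kp, kd
```

The resulting (kp, kd) pairs then feed a task-space PD controller, so the policy can soften the arm for contact-rich phases and stiffen it for free-space motion.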

Reinforcement Learning on Variable Impedance Controller for High-Precision Robotic Assembly

This paper explicitly considers incorporating operational space force/torque information into reinforcement learning; this is motivated by humans heuristically mapping perceived forces to control actions, which results in completing high-precision tasks in a fairly easy manner.

Fundamental Challenges in Deep Learning for Stiff Contact Dynamics

  • Mihir Parmar, Mathew Halm, Michael Posa
  • 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  • 2021
Empirical evidence is presented that learning an accurate model in the first place can be confounded by contact, as modern deep learning approaches are not designed to capture the non-smoothness of contact.

Sim-to-Real Transfer for Biped Locomotion

This work presents a new approach for transferring dynamic robot control policies such as biped locomotion from simulation to real hardware, and uses Bayesian Optimization to determine the values of η that optimize the performance of PUP on the real hardware.

Deep Lagrangian Networks for end-to-end learning of energy-based control for under-actuated systems

The resulting DeLaN for energy control (DeLaN 4EC) is the first model-learning approach using generic function approximation that is capable of learning energy control, as existing approaches cannot learn the system energies directly.

Learning Robotic Manipulation through Visual Planning and Acting

This work learns to imagine goal-directed object manipulation directly from raw image data of self-supervised interaction of the robot with the object, and shows that separating the problem into visual planning and visual tracking control is more efficient and more interpretable than alternative data-driven approaches.

Benchmarking Reinforcement Learning Algorithms on Real-World Robots

This work introduces several reinforcement learning tasks with multiple commercially available robots that present varying levels of learning difficulty, setup, and repeatability. It tests the learning performance of off-the-shelf implementations of four reinforcement learning algorithms and analyzes sensitivity to their hyper-parameters to determine their readiness for applications in various real-world tasks.