Corpus ID: 245424659

Towards Disturbance-Free Visual Mobile Manipulation

@article{Ni2021TowardsDV,
  title={Towards Disturbance-Free Visual Mobile Manipulation},
  author={Tianwei Ni and Kiana Ehsani and Luca Weihs and Jordi Salvador},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.12612}
}
Deep reinforcement learning has shown promising results on an abundance of robotic tasks in simulation, including visual navigation and manipulation. Prior work generally aims to build embodied agents that solve their assigned tasks as quickly as possible, while largely ignoring the problems caused by collision with objects during interaction. This lack of prioritization is understandable: there is no inherent cost in breaking virtual objects. As a result, “well-trained” agents frequently… 
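The abstract's core idea, penalizing agents for disturbing (moving or colliding with) objects that are not part of the task, can be sketched as a reward-shaping term. The function names, the distance-based disturbance measure, and the penalty weight below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def shaped_reward(task_reward, disturbance_distance, penalty_weight=0.5):
    """Hypothetical shaping: subtract a penalty proportional to the total
    distance that non-target objects were displaced during this step."""
    return task_reward - penalty_weight * disturbance_distance

# Positions of two non-target objects before and after an agent action.
before = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
after  = np.array([[0.0, 0.0, 0.0], [1.3, 1.4, 0.0]])

# Total displacement of non-target objects: here 0.5 (only the second moved).
disturbance = np.linalg.norm(after - before, axis=1).sum()

# An agent that completes a subtask (reward 1.0) but disturbs the scene
# receives a reduced reward, discouraging "break everything" policies.
r = shaped_reward(task_reward=1.0, disturbance_distance=disturbance)
```

A penalty on displacement (rather than a hard collision termination) lets the agent trade off speed against care, which matches the abstract's framing of disturbance as a cost rather than a failure.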

A General Purpose Supervisory Signal for Embodied Agents

The Scene Graph Contrastive (SGC) loss is proposed, which uses scene graphs as general-purpose, training-only supervisory signals, and uses contrastive learning to align an agent’s representation with a rich graphical encoding of its environment.
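Contrastive alignment of this kind is commonly implemented with an InfoNCE-style loss: the agent embedding and the scene-graph embedding of the same environment form a positive pair, and other graphs in the batch serve as negatives. This is a minimal sketch of that pattern, not the paper's exact SGC loss:

```python
import numpy as np

def info_nce(agent_repr, graph_repr, temperature=0.1):
    """InfoNCE over a batch: row i of agent_repr should match row i of
    graph_repr; all other rows act as negatives. Illustrative only."""
    a = agent_repr / np.linalg.norm(agent_repr, axis=1, keepdims=True)
    g = graph_repr / np.linalg.norm(graph_repr, axis=1, keepdims=True)
    logits = a @ g.T / temperature                 # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # positives on the diagonal

# Toy check: orthogonal embeddings for four environments.
agent = np.eye(4)
loss_aligned  = info_nce(agent, agent)        # matched pairs: near-zero loss
loss_shuffled = info_nce(agent, agent[::-1])  # mismatched pairs: large loss
```

The loss collapses toward zero exactly when each agent embedding is closest to its own environment's graph embedding, which is the alignment the summary describes.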

References

Showing 1-10 of 111 references

Proximal Policy Optimization Algorithms

We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent.
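The best-known member of this family is PPO-Clip, whose surrogate objective clips the policy probability ratio to keep updates close to the data-collecting policy. The toy numbers below are illustrative:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate objective:
    L = E[ min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t) ],
    where r_t = pi_new(a|s) / pi_old(a|s) and A_t is the advantage."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

# Three sample transitions with positive advantage: the ratio 1.5 is clipped
# to 1.2, so the objective gives no extra credit for moving the policy far.
ratios = np.array([0.5, 1.0, 1.5])
advs   = np.array([1.0, 1.0, 1.0])
obj = ppo_clip_objective(ratios, advs)   # (0.5 + 1.0 + 1.2) / 3 = 0.9
```

Taking the minimum of the clipped and unclipped terms makes the objective a pessimistic bound, which is what lets PPO use multiple gradient epochs per batch of sampled data.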

AI2-THOR: An Interactive 3D Environment for Visual AI

AI2-THOR consists of near photo-realistic 3D indoor scenes, where AI agents can navigate in the scenes and interact with objects to perform tasks and facilitate building visually intelligent models.

Auxiliary Tasks Speed Up Learning PointGoal Navigation

This work develops a method to significantly increase sample and time efficiency in learning PointNav using self-supervised auxiliary tasks (e.g. predicting the action taken between two egocentric observations, predicting the distance between two observations from a trajectory, etc.).
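One of the auxiliary tasks mentioned, predicting the action taken between two egocentric observations, is an inverse-dynamics objective. A minimal linear sketch, with all shapes and names assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def inverse_dynamics_logits(obs_t, obs_t1, W):
    """Hypothetical inverse-dynamics head: score each discrete action from
    the concatenated embeddings of two consecutive observations."""
    return W @ np.concatenate([obs_t, obs_t1])

def cross_entropy(logits, action):
    """Standard softmax cross-entropy for the true action index."""
    logits = logits - logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[action]

# Two consecutive 16-d observation embeddings, 4 discrete actions.
obs_t, obs_t1 = rng.normal(size=16), rng.normal(size=16)
W = rng.normal(size=(4, 32))
loss = cross_entropy(inverse_dynamics_logits(obs_t, obs_t1, W), 2)
```

In practice this loss is added to the RL objective so that the shared encoder is trained on every transition, not just on the sparse task reward, which is where the sample-efficiency gain comes from.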

Benchmarking Safe Exploration in Deep Reinforcement Learning

This work proposes to standardize constrained RL as the main formalism for safe exploration, and presents the Safety Gym benchmark suite, a new slate of high-dimensional continuous control environments for measuring research progress on constrained RL.
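Constrained RL is typically relaxed into a Lagrangian: maximize expected return minus a multiplier times the constraint violation, with the multiplier updated by dual ascent. The update rule and numbers below are a generic sketch, not Safety Gym's specific baseline implementations:

```python
def dual_update(lam, ep_cost, cost_limit, lr=0.05):
    """Dual ascent on the Lagrange multiplier: lambda grows while the
    constraint E[cost] <= cost_limit is violated, shrinks otherwise,
    and is projected back to lambda >= 0."""
    return max(0.0, lam + lr * (ep_cost - cost_limit))

# Episode costs drifting down toward the limit of 25 as training proceeds.
lam = 0.0
for ep_cost in [30.0, 28.0, 26.0, 24.0]:
    lam = dual_update(lam, ep_cost, cost_limit=25.0)
```

The multiplier acts as an automatically tuned penalty weight: it rises while the agent is unsafe and decays once the cost constraint is satisfied, rather than requiring a hand-set penalty coefficient.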

ManipulaTHOR: A Framework for Visual Object Manipulation

This work proposes a framework for object manipulation built upon the physics-enabled, visually rich AI2-THOR framework and presents a new challenge to the Embodied AI community known as ArmPointNav, which extends the popular point navigation task to object manipulation and offers new challenges including 3D obstacle avoidance.

Deep Reinforcement Learning at the Edge of the Statistical Precipice

This paper argues that reliable evaluation in the few-run deep RL regime cannot ignore the uncertainty in results without running the risk of slowing down progress in the field; it advocates reporting interval estimates of aggregate performance and proposes performance profiles to account for the variability in results.
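The interval estimates advocated can be illustrated with a percentile bootstrap over per-run scores, here paired with the interquartile mean as a robust aggregate. This is a simplified sketch of the idea, not the paper's full stratified-bootstrap procedure:

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: mean of the middle 50% of scores, a robust
    alternative to the mean or median for aggregating runs."""
    q1, q3 = np.percentile(scores, [25, 75])
    return scores[(scores >= q1) & (scores <= q3)].mean()

def bootstrap_ci(scores, stat=iqm, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample runs with replacement and take
    the alpha/2 and 1 - alpha/2 percentiles of the statistic."""
    rng = np.random.default_rng(seed)
    stats = [stat(rng.choice(scores, size=len(scores), replace=True))
             for _ in range(n_boot)]
    return tuple(np.percentile(stats, [100 * alpha / 2,
                                       100 * (1 - alpha / 2)]))

runs = np.array([0.6, 0.7, 0.72, 0.75, 0.8, 0.95])  # per-run final scores
low, high = bootstrap_ci(runs)
```

Reporting the interval `(low, high)` instead of a single point estimate makes the few-run uncertainty explicit, which is exactly the practice the paper argues the field should adopt.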

ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only Onboard Sensors

This work devises a robotic reinforcement learning system that learns navigation and manipulation together, autonomously and without human intervention, enabling continual learning under realistic assumptions.

Spatial Action Maps for Mobile Manipulation

This work presents "spatial action maps," in which the set of possible actions is represented by a pixel map (aligned with the input image of the current state), where each pixel represents a local navigational endpoint at the corresponding scene location.

Learning Mobile Manipulation through Deep Reinforcement Learning

A novel mobile manipulation system is proposed that integrates state-of-the-art deep reinforcement learning algorithms with visual perception; its framework efficiently decouples visual perception from the deep reinforcement learning control, enabling generalization from simulation training to real-world testing.

DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames

It is shown that the scene understanding and navigation policies learned can be transferred to other navigation tasks -- the analog of "ImageNet pre-training + task-specific fine-tuning" for embodied AI.
...