Corpus ID: 245424659

Towards Disturbance-Free Visual Mobile Manipulation

@article{Ni2021TowardsDV,
  title={Towards Disturbance-Free Visual Mobile Manipulation},
  author={Tianwei Ni and Kiana Ehsani and Luca Weihs and Jordi Salvador},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.12612}
}
Embodied AI has shown promising results on an abundance of robotic tasks in simulation, including visual navigation and manipulation. Prior work generally pursues high success rates with shortest paths while largely ignoring the problems caused by collisions during interaction. This lack of prioritization is understandable: in simulated environments there is no inherent cost to breaking virtual objects. As a result, well-trained agents frequently have catastrophic collisions with objects… 


Multi-skill Mobile Manipulation for Object Rearrangement

TLDR
This work proposes that manipulation skills should include mobility, so the agent can interact with the target object from multiple locations, and that the navigation skill should admit multiple endpoints that lead to successful manipulation; it operationalizes these ideas by implementing mobile rather than stationary manipulation skills.

ProcTHOR: Large-Scale Embodied AI Using Procedural Generation

TLDR
The proposed ProcTHOR, a framework for procedural generation of Embodied AI environments, enables sampling arbitrarily large datasets of diverse, interactive, customizable, and performant virtual environments to train and evaluate embodied agents across navigation, interaction, and manipulation tasks.

References

SHOWING 1-10 OF 95 REFERENCES

ManipulaTHOR: A Framework for Visual Object Manipulation

TLDR
This work proposes a framework for object manipulation built upon the physics-enabled, visually rich AI2-THOR framework and presents a new challenge to the Embodied AI community known as ArmPointNav, which extends the popular point navigation task to object manipulation and offers new challenges including 3D obstacle avoidance.

Visual navigation with obstacle avoidance

TLDR
A framework for visual navigation with obstacle avoidance is presented and validated; collision avoidance and navigation are achieved simultaneously by actuating the camera pan angle to maintain scene visibility while the robot circumnavigates obstacles.

RoboTHOR: An Open Simulation-to-Real Embodied AI Platform

TLDR
RoboTHOR offers a framework of simulated environments paired with physical counterparts to systematically explore and overcome the challenges of simulation-to-real transfer, and a platform where researchers across the globe can remotely test their embodied models in the physical world.

Spatial Action Maps for Mobile Manipulation

TLDR
This work presents "spatial action maps," in which the set of possible actions is represented by a pixel map (aligned with the input image of the current state), where each pixel represents a local navigational endpoint at the corresponding scene location.
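The core idea of a spatial action map can be made concrete with a few lines of code: each pixel of a value map (aligned with the current image) scores a candidate navigational endpoint, and the agent acts by picking the highest-scoring pixel. This is a minimal, hypothetical sketch of that representation, not the paper's implementation; the function name and the toy map are illustrative.

```python
import numpy as np

def select_spatial_action(q_map):
    """Pick the pixel with the highest predicted value as the local
    navigational endpoint. The 2D map is assumed to be aligned with
    the input image, so (row, col) indexes a scene location."""
    flat_idx = int(np.argmax(q_map))
    row, col = np.unravel_index(flat_idx, q_map.shape)
    return int(row), int(col)

# Toy 4x4 value map; the best endpoint sits at pixel (2, 1).
q = np.zeros((4, 4))
q[2, 1] = 1.0
print(select_spatial_action(q))  # -> (2, 1)
```

Because the action space is a dense pixel grid rather than a small discrete set, the policy's output shares the spatial structure of its input, which is the main appeal of the representation.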

Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning

TLDR
This work presents a decentralized sensor-level collision avoidance policy for multi-robot systems, which directly maps raw sensor measurements to an agent's steering commands in terms of movement velocity and demonstrates that the learned policy can be well generalized to new scenarios that do not appear in the entire training period.

A Collision Avoidance Method Based on Deep Reinforcement Learning

TLDR
This research aims to determine whether a deep reinforcement learning-based collision avoidance method is superior to traditional methods such as potential field-based methods and the dynamic window approach.

BADGR: An Autonomous Self-Supervised Learning-Based Navigation System

TLDR
The reinforcement learning approach, which the authors call BADGR, is an end-to-end learning-based mobile robot navigation system that can be trained with autonomously-labeled off-policy data gathered in real-world environments, without any simulation or human supervision.

A depth space approach to human-robot collision avoidance

TLDR
A fast method to evaluate distances between the robot and possibly moving obstacles (including humans), based on the concept of depth space, is used to generate repulsive vectors that are used to control the robot while executing a generic motion task.
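The repulsive-vector idea described above can be sketched directly: given the distance between a robot control point and the nearest obstacle point, generate a vector that pushes the control point away, with magnitude growing as the distance shrinks and vanishing beyond an influence radius. The function name, `d_max` cutoff, and the particular magnitude profile here are illustrative assumptions, not the paper's exact depth-space formulation.

```python
import numpy as np

def repulsive_vector(control_point, obstacle_point, d_max=1.0, gain=1.0):
    """Repulsive vector pointing from the obstacle toward the robot
    control point. Magnitude is zero at or beyond d_max and grows
    without bound as the distance approaches zero (a common
    potential-field-style profile, used here as an assumption)."""
    diff = np.asarray(control_point, dtype=float) - np.asarray(obstacle_point, dtype=float)
    d = np.linalg.norm(diff)
    if d >= d_max or d == 0.0:
        return np.zeros_like(diff)
    magnitude = gain * (1.0 / d - 1.0 / d_max)
    return magnitude * diff / d

# Obstacle directly below the control point along z: the vector
# pushes the control point away along +z.
v = repulsive_vector([0.0, 0.0, 0.5], [0.0, 0.0, 0.0], d_max=1.0)
```

In a depth-space method, the obstacle point and distance would come directly from the depth image rather than a 3D model, which is what makes the evaluation fast.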

Collision Avoidance in Pedestrian-Rich Environments With Deep Reinforcement Learning

TLDR
This work develops an algorithm that learns collision avoidance among a variety of heterogeneous, non-communicating, dynamic agents without assuming they follow any particular behavior rules and extends the previous work by introducing a strategy using Long Short-Term Memory (LSTM) that enables the algorithm to use observations of an arbitrary number of other agents.

Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation

TLDR
It is shown that, through an asynchronous deep reinforcement learning method, a mapless motion planner can be trained end-to-end without any manually designed features and prior demonstrations.
...