Corpus ID: 227118806

Learning Synthetic to Real Transfer for Localization and Navigational Tasks

Maxime Pietrantoni, Boris Chidlovskii, Tomi Silander
Autonomous navigation consists of an agent navigating without human intervention or supervision; it involves both high-level planning and low-level control. Navigation sits at the crossroads of multiple disciplines, combining notions from computer vision, robotics, and control. This work aimed at creating, in simulation, a navigation pipeline whose transfer to the real world requires as little effort as possible. Given the limited time and the wide range of problems to be… 


SnapNav: Learning Mapless Visual Navigation with Sparse Directional Guidance and Visual Reference
SnapNav, a deep neural network based visual navigation system, is proposed; it achieves highly autonomous navigation compared to baseline models, enabling sparse, mapless navigation in previously unseen environments.
Cognitive Mapping and Planning for Visual Navigation
The Cognitive Mapper and Planner is based on a unified joint architecture for mapping and planning, in which mapping is driven by the needs of the task, together with a spatial memory that can plan given an incomplete set of observations about the world.
Target-driven visual navigation in indoor scenes using deep reinforcement learning
This paper proposes an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization and proposes the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine.
Learning to Explore using Active Neural SLAM
This work presents a modular and hierarchical approach to learn policies for exploring 3D environments, called 'Active Neural SLAM', which leverages the strengths of both classical and learning-based methods.
Neural Topological SLAM for Visual Navigation
This paper designs topological representations for space that effectively leverage semantics and afford approximate geometric reasoning, and describes supervised learning-based algorithms that can build, maintain and use such representations under noisy actuation.
Vision-and-Language Navigation: Interpreting Visually-Grounded Navigation Instructions in Real Environments
This work provides the first benchmark dataset for visually-grounded natural language navigation in real buildings, the Room-to-Room (R2R) dataset, and presents the Matterport3D Simulator, a large-scale reinforcement learning environment based on real imagery.
Learning to Navigate in Complex Environments
This work jointly learns the goal-driven reinforcement learning problem with auxiliary depth-prediction and loop-closure classification tasks, and shows that data efficiency and task performance can be dramatically improved by relying on additional auxiliary tasks that leverage multimodal sensory inputs.
Sonar-Based Real-World Mapping and Navigation
A sonar-based mapping and navigation system developed for an autonomous mobile robot operating in unknown and unstructured environments is described. The system uses sonar range data to build a… 
Integrating Topological and Metric Maps for Mobile Robot Navigation: A Statistical Approach
This paper poses the mapping problem as a statistical maximum likelihood problem, and devises an efficient algorithm for search in likelihood space that integrates two phases: a topological and a metric mapping phase.
Semi-parametric Topological Memory for Navigation
A new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals, that consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a deep network capable of retrieving nodes from the graph based on observations.