Neural Network Memory Architectures for Autonomous Robot Navigation
@article{Chen2017NeuralNM,
  title={Neural Network Memory Architectures for Autonomous Robot Navigation},
  author={Steven W. Chen and Nikolay A. Atanasov and Arbaaz Khan and Konstantinos Karydis and Daniel D. Lee and Vijay R. Kumar},
  journal={ArXiv},
  year={2017},
  volume={abs/1705.08049}
}
Author(s): Chen, Steven; Atanasov, Nikolay; Khan, Arbaaz; Karydis, Konstantinos; Lee, Daniel; Kumar, Vijay
7 Citations
Learning Sample-Efficient Target Reaching for Mobile Robots
- Computer Science, 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- 2018
A novel architecture and a self-supervised policy gradient algorithm, which employs unsupervised auxiliary tasks to enable a mobile robot to learn how to navigate to a given goal in a sample-efficient manner.
Learning to Actively Reduce Memory Requirements for Robot Control Tasks
- Computer Science, L4DC
- 2021
This work presents a reinforcement learning framework that leverages an implementation of the group LASSO regularization to synthesize policies that employ low-dimensional and task-centric memory representations and actively reduce memory requirements.
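The group LASSO penalty mentioned above penalizes the l2 norm of entire groups of weights, so whole groups (and with them, whole memory dimensions) can be driven exactly to zero. Below is a minimal sketch of that penalty in NumPy; grouping weight columns by memory unit, and every value here, is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def group_lasso_penalty(W, groups, lam=0.1):
    """Group LASSO: lam * sum over groups of ||W[:, g]||_2.

    Unlike plain L1, the penalty is non-differentiable only at the
    point where an entire group is zero, so optimization tends to
    prune whole groups (here: columns feeding one memory unit).
    """
    return lam * sum(np.linalg.norm(W[:, g]) for g in groups)

# Hypothetical weight matrix: each column group is one memory unit.
W = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 2.0]])
groups = [[0], [1], [2]]  # one group per memory dimension
penalty = group_lasso_penalty(W, groups, lam=0.5)
```

Because the first two groups are exactly zero, only the third contributes, illustrating how zeroed groups drop out of the memory representation entirely.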
Memory Augmented Control Networks
- Computer Science, ICLR
- 2018
It is shown that the Memory Augmented Control Network learns to plan and can generalize to new environments and is evaluated in discrete grid world environments for path planning in the presence of simple and complex obstacles.
Vision-Guided MPC for Robotic Path Following Using Learned Memory-Augmented Model
- Computer Science, Frontiers in Robotics and AI
- 2021
This work presents an end-to-end framework for trajectory-independent robotic path following for contact-rich tasks in the presence of parametric uncertainties, and introduces the application of the differentiable neural computer, a type of memory augmented neural network (MANN).
Learning to Imagine Manipulation Goals for Robot Task Planning
- Computer Science, ArXiv
- 2017
This work learns a neural net that encodes the k most likely outcomes of high-level actions in a given world, and creates comprehensible task plans that allow prediction of changes to the environment many time steps into the future.
Learning Long-term Dependencies with Deep Memory States
- Computer Science
- 2017
A reinforcement learning method is proposed that addresses the limitations of truncated BPTT by using a learned critic to estimate truncated gradients and by saving and loading hidden states output by recurrent neural networks.
CRLB Analysis for a Robust TRN Based on a Combination of RNN and PF
- Computer Science, International Journal of Aeronautical and Space Sciences
- 2019
A robust PF-based TRN was designed, which uses a recurrent neural network (RNN)-based deep learning method to function on flat and repetitive terrains and a Cramér–Rao lower bound (CRLB) analysis was performed to evaluate how close the proposed method was to the optimal design.
References
Showing 1–10 of 45 references
Probabilistic robotics
- Computer Science, CACM
- 2002
This research presents a novel approach to planning and navigation algorithms that exploit statistics gleaned from uncertain, imperfect real-world environments to guide robots toward their goals and around obstacles.
Neural Turing Machines
- Computer Science, ArXiv
- 2014
A combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.
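The differentiability noted above comes from the Neural Turing Machine's soft memory addressing: a read is an attention-weighted blend of all memory slots rather than a discrete lookup. A minimal sketch of content-based reading follows, assuming cosine-similarity addressing with a sharpness parameter beta; the memory contents and sizes are made up for illustration.

```python
import numpy as np

def content_read(memory, key, beta=1.0):
    """NTM-style content-based read.

    memory: (N, M) matrix of N slots; key: (M,) query vector.
    Returns a convex combination of slots weighted by a softmax over
    scaled cosine similarities, so gradients flow to every slot.
    """
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)      # beta sharpens the attention
    w /= w.sum()
    return w @ memory, w

# Two slots; the key matches slot 0, so most read weight lands there.
memory = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
read, w = content_read(memory, np.array([1.0, 0.0]), beta=5.0)
```

With a larger beta the softmax approaches a hard argmax lookup, while small beta blends slots more evenly; either way the operation stays differentiable in both the key and the memory.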
Robotic mapping: a survey
- Mathematics
- 2003
This article provides a comprehensive introduction to the field of robotic mapping, with a focus on indoor mapping. It describes and compares various probabilistic techniques, as they are presently…
From perception to decision: A data-driven approach to end-to-end motion planning for autonomous ground robots
- Computer Science, 2017 IEEE International Conference on Robotics and Automation (ICRA)
- 2017
This work presents the first approach that learns a target-oriented end-to-end navigation model for a robotic platform, and demonstrates that the learned navigation model is directly transferable to previously unseen virtual and, more interestingly, real-world environments.
Asynchronous Methods for Deep Reinforcement Learning
- Computer Science, ICML
- 2016
A conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers and shows that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.
Hybrid computing using a neural network with dynamic external memory
- Computer Science, Nature
- 2016
A machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer.
Memory-based control with recurrent neural networks
- Computer Science, ArXiv
- 2015
This work extends two related, model-free algorithms for continuous control to solve partially observed domains using recurrent neural networks trained with backpropagation through time, finding that recurrent deterministic and stochastic policies learn similarly good solutions to these tasks, including a water maze where the agent must learn effective search strategies.
Value Iteration Networks
- Computer Science, NIPS
- 2016
This work introduces the value iteration network (VIN), a fully differentiable neural network with a `planning module' embedded within that shows that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.
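The explicit planning computation that a VIN embeds as a recurrent convolutional layer is classical value iteration. The sketch below runs tabular value iteration on a small grid; note that `np.roll` wraps at the edges, making this a toroidal grid, and the grid size, reward layout, and discount are illustrative assumptions.

```python
import numpy as np

def value_iteration(rewards, gamma=0.95, iters=50):
    """Tabular value iteration on a 2-D grid with 4-connected moves.

    Each iteration is a Bellman backup: the value of a cell is its
    reward plus the discounted value of its best neighbor. np.roll
    wraps at the edges (toroidal grid) -- a simplification.
    """
    V = np.zeros_like(rewards)
    for _ in range(iters):
        # Neighbor values reached by moving up/down/left/right.
        shifted = [np.roll(V, s, axis=a) for a in (0, 1) for s in (1, -1)]
        V = rewards + gamma * np.max(shifted, axis=0)
    return V

rewards = np.zeros((5, 5))
rewards[4, 4] = 1.0  # single goal cell with reward 1
V = value_iteration(rewards)
```

After enough iterations the value function peaks at the goal and decays with distance from it; a greedy policy over neighboring values then recovers shortest paths, which is exactly the computation a VIN learns to perform with a convolution plus channel-wise max.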
Learning deep control policies for autonomous aerial vehicles with MPC-guided policy search
- Computer Science, 2016 IEEE International Conference on Robotics and Automation (ICRA)
- 2016
This work proposes to combine MPC with reinforcement learning in the framework of guided policy search, where MPC is used to generate data at training time, under full state observations provided by an instrumented training environment, and a deep neural network policy is trained, which can successfully control the robot without knowledge of the full state.
Exact robot navigation using artificial potential functions
- Computer Science, IEEE Trans. Robotics Autom.
- 1992
A methodology for exact robot motion planning and control that unifies the purely kinematic path planning problem with the lower level feedback controller design is presented. Complete information…
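Potential-function navigation steers the robot by descending the gradient of an attractive potential toward the goal plus repulsive potentials around obstacles. The sketch below uses the classic additive attractive/repulsive construction; the gains, influence radius, and scenario are illustrative, and unlike the exact navigation functions of the paper above, this simple additive field does not guarantee a unique minimum at the goal.

```python
import numpy as np

def potential_gradient(q, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=1.0):
    """Gradient of attractive-plus-repulsive potential at position q.

    Attractive term: 0.5 * k_att * ||q - goal||^2.
    Repulsive term per obstacle, active within radius rho0:
    0.5 * k_rep * (1/rho - 1/rho0)^2, where rho = ||q - obstacle||.
    """
    grad = k_att * (q - goal)
    for obs in obstacles:
        d = q - obs
        rho = np.linalg.norm(d)
        if 0 < rho < rho0:
            # Negative coefficient: potential decreases away from the
            # obstacle, so gradient descent pushes the robot outward.
            grad += k_rep * (1.0 / rho0 - 1.0 / rho) / rho**3 * d
    return grad

# Gradient descent on the field steers the robot to the goal while
# the obstacle's influence region deflects nearby trajectories.
q = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
obstacles = [np.array([1.0, 1.0])]
for _ in range(200):
    q -= 0.05 * potential_gradient(q, goal, obstacles)
```

In this configuration the straight-line path stays outside the obstacle's influence radius, so the robot converges essentially to the goal; placing the obstacle on the path would bend the trajectory around it, at the cost of possible local minima that exact navigation functions are constructed to avoid.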