Efficient Exploration in Constrained Environments with Goal-Oriented Reference Path

@inproceedings{Ota2020EfficientEI,
  title={Efficient Exploration in Constrained Environments with Goal-Oriented Reference Path},
  author={Keita Ota and Yoko Sasaki and Devesh K. Jha and Yusuke Yoshiyasu and Asako Kanezaki},
  booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2020},
  pages={6061-6068}
}
In this paper, we consider the problem of building learning agents that can efficiently learn to navigate in constrained environments. The main goal is to design agents that can efficiently learn to understand and generalize to different environments using high-dimensional inputs (a 2D map), while following feasible paths that avoid obstacles in obstacle-cluttered environments. To achieve this, we make use of traditional path planning algorithms, supervised learning, and reinforcement learning…
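The reference-path guidance the abstract describes can be sketched as a dense shaping reward for the RL agent; this is a hypothetical illustration (the function name `path_progress_reward` and the nearest-waypoint measure are assumptions), not the paper's actual formulation:

```python
import numpy as np

def path_progress_reward(ref_path, prev_pos, pos):
    # Dense shaping reward: how far the agent advanced along a precomputed
    # reference path (e.g. waypoints from a classical planner such as A*),
    # measured as the change in index of the nearest waypoint.
    # Illustrative sketch only, not the paper's exact reward.
    ref = np.asarray(ref_path, dtype=float)

    def nearest(p):
        return int(np.argmin(np.linalg.norm(ref - np.asarray(p, dtype=float), axis=1)))

    return nearest(pos) - nearest(prev_pos)
```

With a straight-line path of unit-spaced waypoints, moving three waypoints forward yields a reward of 3 and moving backward yields the negative, so the planner's path shapes exploration toward the goal without dictating the exact trajectory.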
Deep Reactive Planning in Dynamic Environments
Traditional kinematic planning, deep learning, and deep reinforcement learning are combined in a synergistic fashion to generalize to arbitrary environments and allow a robot to learn an end-to-end policy which can adapt to changes in the environment during execution.
Deep Reinforcement Learning for Mapless Navigation of Unmanned Aerial Vehicles
This paper presents a deep reinforcement learning-based system for goal-oriented mapless navigation for Unmanned Aerial Vehicles (UAVs) based on two state-of-the-art Deep-RL techniques for terrestrial robots: Deep Deterministic Policy Gradient and Soft Actor-Critic.
Deep Reinforcement Learning for Mapless Navigation of a Hybrid Aerial Underwater Vehicle with Medium Transition
It is concluded that Deep-RL-based approaches can be successfully used to perform mapless navigation and obstacle avoidance for HUAUVs.
Deep Reactive Planning in Dynamic Environments
Kei Ota, Devesh Jha, Tadashi Onishi, Asako Kanezaki, Yusuke Yoshiyasu, Toshisada Mariyama, Daniel N. Nikovski (November 12, 2020)
The main novelty of the proposed approach is that it allows a robot to learn an end-to-end policy which can adapt to changes in the environment during execution. While goal conditioning of policies has…
MPC-MPNet: Model-Predictive Motion Planning Networks for Fast, Near-Optimal Planning Under Kinodynamic Constraints
This work presents a scalable, imitation learning-based, Model-Predictive Motion Planning Networks framework that quickly finds near-optimal path solutions with worst-case theoretical guarantees under kinodynamic constraints for practical underactuated systems.
Development of a Basic Educational Kit for Robotic System with Deep Neural Networks
A basic educational kit for robotic system development with deep neural networks is proposed to educate beginners in both robotics and machine learning, especially the use of DNNs.

References

Showing 1-10 of 36 references
Combining Optimal Control and Learning for Visual Navigation in Novel Environments
This work couples model-based control with learning-based perception to produce a series of waypoints that guide the robot to the goal via a collision-free path, and demonstrates that the proposed approach can reach goal locations more reliably and efficiently in novel environments as compared to purely geometric mapping-based or end-to-end learning-based alternatives.
GOSELO: Goal-Directed Obstacle and Self-Location Map for Robot Navigation Using Reactive Neural Networks
The key concept is to crop, rotate, and rescale an obstacle map according to the goal location and the agent's current location so that the map representation will be better correlated with self-movement in the general navigation task, rather than the layout of the environment.
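The crop-rotate-rescale encoding described above can be sketched roughly as follows; the function name `goselo_transform`, the window size, the obstacle padding value, and the nearest-neighbour sampling are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def goselo_transform(occ_map, agent, goal, out_size=32):
    # Rotate the occupancy grid so the agent-to-goal direction points "up"
    # (toward row 0), then crop a window centred midway between agent and
    # goal. Nearest-neighbour sketch of a GOSELO-style input encoding.
    agent = np.asarray(agent, dtype=float)
    goal = np.asarray(goal, dtype=float)
    centre = (agent + goal) / 2.0
    dy, dx = goal - agent
    theta = np.arctan2(-dx, -dy)  # maps the agent->goal vector onto -row
    c, s = np.cos(theta), np.sin(theta)
    out = np.ones((out_size, out_size), dtype=occ_map.dtype)  # pad = obstacle
    half = out_size // 2
    for i in range(out_size):
        for j in range(out_size):
            ry, rx = i - half, j - half        # pixel relative to centre
            sy = centre[0] + c * ry - s * rx   # rotate back into the
            sx = centre[1] + s * ry + c * rx   # source map's coordinates
            yi, xi = int(round(sy)), int(round(sx))
            if 0 <= yi < occ_map.shape[0] and 0 <= xi < occ_map.shape[1]:
                out[i, j] = occ_map[yi, xi]
    return out
```

Because the window is always goal-aligned, the same wall configuration produces the same network input regardless of where it sits in the global map, which is what makes the representation correlate with self-movement rather than environment layout.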
Motion Planning Among Dynamic, Decision-Making Agents with Deep Reinforcement Learning
  • Michael Everett, Y. Chen, J. How
  • 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018
This work extends the previous approach to develop an algorithm that learns collision avoidance among a variety of types of dynamic agents without assuming they follow any particular behavior rules, and introduces a strategy using LSTM that enables the algorithm to use observations of an arbitrary number of other agents, instead of previous methods that have a fixed observation size.
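The variable-agent-count idea can be illustrated with a minimal LSTM encoder in plain NumPy; `lstm_encode`, the gate ordering, and the untrained placeholder weights are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_encode(agent_obs, Wx, Wh, b):
    # Fold a variable-length sequence of per-agent observation vectors into
    # one fixed-size hidden state with a single LSTM layer, so a downstream
    # policy network sees the same input size for any number of neighbours.
    # Gate layout (i, f, g, o) is one common convention; weights are
    # untrained placeholders.
    hidden = b.shape[0] // 4
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in agent_obs:                  # one recurrent step per agent
        z = Wx @ x + Wh @ h + b          # all four gate pre-activations
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h                             # same size for any agent count
```

Whether three or thirty other agents are observed, the returned vector has the same dimension, which is the property that lets the policy's input layer stay fixed.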
Learning Navigation Behaviors End-to-End With AutoRL
Empirical evaluations show that AutoRL policies do not suffer from the catastrophic forgetfulness that plagues many other deep reinforcement learning algorithms, generalize to new environments and moving obstacles, are robust to sensor, actuator, and localization noise, and can serve as robust building blocks for larger navigation tasks.
Trajectory Optimization for Unknown Constrained Systems using Reinforcement Learning
A reinforcement learning-based algorithm for trajectory optimization for constrained dynamical systems, which is trained with a reference path and parameterizes the policies with goal locations, so that the agent can be trained for multiple goals simultaneously.
Hierarchical Reinforcement Learning for Robot Navigation
An HRL architecture for learning robot movements, e.g. robot navigation, which consists of two layers, movement planning and movement execution, and which is implemented and evaluated on a mobile robot platform for a navigation task.
PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-Based Planning
This work presents PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL), and evaluates it on two navigation tasks with non-trivial robot dynamics.
Benchmarking Safe Exploration in Deep Reinforcement Learning
Reinforcement learning (RL) agents need to explore their environments in order to learn optimal policies by trial and error. In many environments, safety is a critical concern and certain errors are…
Deeply Informed Neural Sampling for Robot Motion Planning
A neural network-based adaptive sampler for motion planning called Deep Sampling-based Motion Planner (DeepSMP), which generates samples for sampling-based motion planners (SMPs) and enhances their overall speed significantly while exhibiting efficient scalability to higher-dimensional problems.
Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation
h-DQN is presented, a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning, and allows for flexible goal specifications, such as functions over entities and relations.