• Corpus ID: 14147627

Towards Vision-Based Deep Reinforcement Learning for Robotic Motion Control

@article{Zhang2015TowardsVD,
  title={Towards Vision-Based Deep Reinforcement Learning for Robotic Motion Control},
  author={Fangyi Zhang and J. Leitner and Michael Milford and Ben Upcroft and Peter Corke},
  journal={ArXiv},
  year={2015},
  volume={abs/1511.03791}
}
This paper introduces a machine-learning-based system for controlling a robotic manipulator using visual perception alone. The capability to autonomously learn robot controllers solely from raw-pixel images, without any prior knowledge of the robot's configuration, is shown for the first time. We build on the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q Network (DQN) was… 
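The abstract's core ingredients, a Q-function over raw observations, epsilon-greedy exploration, and a periodically synced target network, can be sketched minimally. This is an illustrative assumption-laden sketch, not the paper's actual architecture: it uses a linear Q-function over a flattened observation vector instead of a convolutional network, and the observation size, action set, and hyperparameters below are invented for the demo.

```python
import numpy as np

# Hypothetical dimensions for a toy demo (the paper uses camera images of a
# simulated 3-joint arm; nothing here is taken from the paper's setup).
N_ACTIONS = 3    # e.g. discretized joint commands (assumed)
OBS_DIM = 16     # flattened "pixel" observation (assumed tiny for the demo)
GAMMA = 0.9      # discount factor
LR = 0.05        # SGD step size
EPSILON = 0.1    # exploration rate

rng = np.random.default_rng(0)
W = np.zeros((N_ACTIONS, OBS_DIM))   # online Q-network (linear weights)
W_target = W.copy()                  # target-network copy

def q_values(weights, obs):
    """Q(s, a) for all actions under the linear approximator."""
    return weights @ obs

def select_action(obs):
    """Epsilon-greedy action selection from the online network."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(W, obs)))

def td_update(obs, action, reward, next_obs, done):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q_target(s',a')."""
    target = reward if done else reward + GAMMA * np.max(q_values(W_target, next_obs))
    td_error = target - q_values(W, obs)[action]
    W[action] += LR * td_error * obs   # gradient of 0.5*td_error^2 w.r.t. W[action]
    return td_error

def sync_target():
    """Copy online weights into the target network (done periodically in DQN)."""
    W_target[:] = W
```

In the full method the linear map is replaced by a convolutional network and transitions are sampled from a replay buffer; the update rule above is otherwise the same temporal-difference step DQN performs per minibatch element.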


Leveraging Deep Reinforcement Learning for Reaching Robotic Tasks
  • K. Katyal, I.-J. Wang, P. Burlina
  • Computer Science, Psychology
    2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2017
TLDR
A manipulation policy is learned which takes the first steps toward generalizing to changes in the environment and can scale and adapt to new manipulators.
3D Simulation for Robot Arm Control with Deep Q-Learning
TLDR
This work presents an approach which uses 3D simulations to train a 7-DOF robotic arm in a control task without any prior knowledge, and presents preliminary results in direct transfer of policies over to a real robot, without any further training.
Vision-Based Deep Reinforcement Learning For UR5 Robot Motion Control
TLDR
The results indicate that the vision-based DRL method proposed in the paper can successfully learn the reaching-task skill, and that the asymmetric actor-critic structure and auxiliary-task objective effectively improve the learning efficiency and final performance of the DRL method.
One-Shot Reinforcement Learning for Robot Navigation with Interactive Replay
TLDR
This work presents a method for learning to navigate, to a fixed goal and in a known environment, on a mobile robot that leverages an interactive world model built from a single traversal of the environment, a pre-trained visual feature encoder, and stochastic environmental augmentation.
Towards a Sample Efficient Reinforcement Learning Pipeline for Vision Based Robotics
TLDR
This paper studies how to limit the time needed to train a robotic arm with 6 Degrees Of Freedom (DOF) to reach a ball from scratch, by assembling a pipeline that is as efficient as possible.
A Validation Approach for Deep Reinforcement Learning of a Robotic Arm in a 3D Simulated Environment
TLDR
This work concerns the training of a robot in a simulation environment, designing a Deep Q-Network that processes images acquired by an RGB vision sensor inside a 3D simulated environment and outputs a value for each action the robotic arm can execute in the current state.
Vision-based deep reinforcement learning to control a manipulator
TLDR
This paper considers a task for the end-effector to reach a random target and proposes a new approach using a vision-based direction vector, which has low dimension and can be simply implemented.
Vision-Based Reaching Using Modular Deep Networks: from Simulation to the Real World
TLDR
A deep network architecture that maps visual input to control actions for a robotic planar reaching task with 100% reliability in real-world trials is described.
Towards Lifelong Self-Supervision: A Deep Learning Direction for Robotics
TLDR
This manuscript surveys recent work in the literature on applying deep learning systems to the robotics domain, either as a means of estimation or as a tool to resolve motor commands directly from raw percepts, and suggests that deep learning as a tool alone is insufficient for building a unified framework to acquire general intelligence.

References

SHOWING 1-10 OF 27 REFERENCES
End-to-End Training of Deep Visuomotor Policies
TLDR
This paper develops a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors, trained using a partially observed guided policy search method, with supervision provided by a simple trajectory-centric reinforcement learning method.
Playing Atari with Deep Reinforcement Learning
TLDR
This work presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning; it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
Reinforcement Learning in Robotics: Applications and Real-World Challenges
TLDR
A summary of the state-of-the-art of reinforcement learning in the context of robotics, in terms of both algorithms and policy representations is given.
Curiosity driven reinforcement learning for motion planning on humanoids
TLDR
This work embodies a curious agent in the complex iCub humanoid robot, the first ever embodied, curious agent for real-time motion planning on a humanoid, and demonstrates that it can learn compact Markov models to represent large regions of the iCub's configuration space.
Imitation Learning of Dual-Arm Manipulation Tasks in Humanoid Robots
TLDR
A model of the human upper body is created to simulate the reproduction of dual-arm movements and generate natural-looking joint configurations from tracked hand paths, and the work shows how HMMs can be used to detect temporal dependencies between the arms in dual-arm tasks.
Learning contact-rich manipulation skills with guided policy search
TLDR
This paper extends a recently developed policy search method and uses it to learn a range of dynamic manipulation behaviors with highly general policy representations, without using known models or example demonstrations, and shows that this method can acquire fast, fluent behaviors after only minutes of interaction time.
Human-level control through deep reinforcement learning
TLDR
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Reinforcement Learning for Humanoid Robotics
TLDR
This paper discusses different approaches of reinforcement learning in terms of their applicability in humanoid robotics, and demonstrates that ‘vanilla’ policy gradient methods can be significantly improved using the natural policy gradient instead of the regular policy gradient.
Vision for robotic object manipulation in domestic settings
Survey on Visual Servoing for Manipulation
TLDR
The proposed terminology is used to introduce a young researcher to, and lead experts in the field through, three decades of vision-guided robotics.