Corpus ID: 235098920

No-frills Dynamic Planning using Static Planners

Mara Levy, Vasista Ayyagari, Abhinav Shrivastava
In this paper, we address the task of interacting with dynamic environments where the changes in the environment are independent of the agent. We study this in the context of trapping a moving ball with a UR5 robotic arm. Our key contribution is an approach that utilizes a static planner for dynamic tasks via a Dynamic Planning add-on; that is, if we can successfully solve a task with a static target, then our approach can solve the same task when the target is moving. Our approach has three…
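The general idea sketched in the abstract, reusing a planner built for static targets by repeatedly re-planning against a prediction of where the moving target will be, can be illustrated generically. Everything below is an illustrative assumption, not the paper's actual method: `static_plan` is a toy stand-in for any static planner, and the constant-velocity forecast, gain, and tolerance are arbitrary.

```python
import math

# Hypothetical stand-in for a static planner: given the arm state and a
# FIXED target position, return the next arm state. (Toy: step each
# coordinate halfway toward the target.)
def static_plan(arm_state, target):
    return [a + 0.5 * (t - a) for a, t in zip(arm_state, target)]

def predict_target(position, velocity, dt):
    # Constant-velocity forecast of the moving target's position.
    return [p + v * dt for p, v in zip(position, velocity)]

def dynamic_plan(arm_state, target_pos, target_vel, dt=0.1, steps=50, tol=1e-2):
    """Wrap a static planner for a moving target: at every control step,
    re-plan against a short-horizon prediction of the target, then let the
    target advance. Returns the final arm state and whether it got within
    tol of the target."""
    for _ in range(steps):
        goal = predict_target(target_pos, target_vel, dt)   # plan one step ahead
        arm_state = static_plan(arm_state, goal)            # static planner call
        target_pos = predict_target(target_pos, target_vel, dt)  # world moves
        if math.dist(arm_state, target_pos) < tol:
            return arm_state, True
    return arm_state, False

arm, caught = dynamic_plan([0.0, 0.0], [1.0, 0.0], [0.05, 0.0])
```

With a constant-velocity target the tracking error contracts each step toward roughly `|v| * dt`, so the arm "catches" a slow target; a real system would re-observe the target velocity inside the loop rather than trust one forecast.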

Trajectory planning for optimal robot catching in real-time
Evaluations indicate that the presented method is highly efficient in complex tasks such as ball catching, which can be formulated as a non-linear optimization problem in which the desired trajectory is encoded by an adequate parametric representation.
Catching Objects in Flight
This work proposes a new methodology to find a feasible catching configuration in a probabilistic manner and uses the dynamical systems approach to encode motion from several demonstrations, which enables a rapid and reactive adaptation of the arm motion in the presence of sensor uncertainty.
Online optimal trajectory generation for robot table tennis
A new trajectory generation framework for robotic table tennis that does not involve a fixed hitting plane is introduced, and a free-time optimal control approach is used to derive two different trajectory optimizers, Focused Player and Defensive Player, which encode two different play-styles.
Learning Visual Predictive Models of Physics for Playing Billiards
This paper explores how an agent can be equipped with an internal model of the dynamics of the external world, and how it can use this model to plan novel actions by running multiple internal simulations ("visual imagination").
Deep visual foresight for planning robot motion
This work develops a method for combining deep action-conditioned video prediction models with model-predictive control that uses entirely unlabeled training data and enables a real robot to perform nonprehensile manipulation (pushing objects), handling novel objects not seen during training.
Visual Reaction: Learning to Play Catch With Your Drone
The results show that the model that integrates a forecaster with a planner outperforms a set of strong baselines that are based on tracking as well as pure model-based and model-free RL baselines.
Continuous control with deep reinforcement learning
This work presents an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces, and demonstrates that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
Robot Catching: Towards Engaging Human-Humanoid Interaction
A catching behavior between a person and a robot is described: ball-hand impact predictions are made from the flight of the ball, and human-like motion trajectories are generated to move the hand to the catch position.
Hindsight Experience Replay
A novel technique is presented that allows sample-efficient learning from rewards that are sparse and binary, avoiding the need for complicated reward engineering; it may be seen as a form of implicit curriculum.
Learning to Act by Predicting the Future
The presented approach utilizes a high-dimensional sensory stream and a lower-dimensional measurement stream that provides a rich supervisory signal, which enables training a sensorimotor control model by interacting with the environment.