Intercepting A Flying Target While Avoiding Moving Obstacles: A Unified Control Framework With Deep Manifold Learning

Apan Dastider, Mingjie Lin
Real-time interception of a fast-moving object by a robotic arm in cluttered environments filled with static or dynamic obstacles permits reaction times of only tens of milliseconds, making it challenging for state-of-the-art robotic planning algorithms to execute multiple robotic skills in parallel, for instance catching the dynamic object while avoiding obstacles. This paper proposes a unified framework for robotic path planning through embedding the high-dimensional temporal…

Catching Objects in Flight

This work proposes a new methodology to find a feasible catching configuration in a probabilistic manner and uses the dynamical systems approach to encode motion from several demonstrations, which enables a rapid and reactive adaptation of the arm motion in the presence of sensor uncertainty.

Robot Motion Planning in Learned Latent Spaces

L-SBMP is presented, a methodology for computing motion plans for complex robotic systems by learning a plannable latent representation through an autoencoding network, a dynamics network, and a collision-checking network, which mirror the three main algorithmic primitives of sampling-based motion planning (SBMP).
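The three learned primitives described above can be illustrated with a minimal numpy sketch. This is not the L-SBMP implementation: the linear "encoder"/"decoder" standing in for the trained networks, the state and latent dimensions, and the disk-shaped obstacle check are all illustrative assumptions.

```python
import numpy as np

# Toy linear stand-in for the learned autoencoder; L-SBMP trains
# nonlinear networks from data instead.
W = np.random.default_rng(0).normal(size=(2, 4))  # latent dim 2, state dim 4

def encode(x):
    return W @ x

def decode(z):
    return np.linalg.pinv(W) @ z

def collision_free(x, obstacles, radius=0.5):
    # Hypothetical collision check: first two state components are the
    # workspace position, obstacles are disks of the given radius.
    return all(np.linalg.norm(x[:2] - o) > radius for o in obstacles)

def latent_plan(x0, x1, obstacles, n=20):
    # Steer by interpolating in latent space, then decode each waypoint
    # and run the collision check on the decoded states -- mirroring
    # SBMP's sample/steer/check primitives in the learned space.
    z0, z1 = encode(x0), encode(x1)
    path = [decode(z0 + t * (z1 - z0)) for t in np.linspace(0.0, 1.0, n)]
    return path if all(collision_free(x, obstacles) for x in path) else None
```

A real system would replace the straight-line latent steering with the learned dynamics network's rollouts.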

Learning Riemannian Manifolds for Geodesic Motion Skills

This work proposes to learn a Riemannian manifold from human demonstrations on which geodesics are natural motion skills, and realizes this with a variational autoencoder (VAE) over the space of positions and orientations of the robot end-effector.

How Fast Is Too Fast? The Role of Perception Latency in High-Speed Sense and Avoid

This is the first theoretical work in which perception and actuation limitations are jointly considered to study the performance of a robotic platform in high-speed navigation.

Robot Anticipation Learning System for Ball Catching

The results show that the information fused from both throwing and flying motions improves the ball-catching rate by up to 20% compared to the baseline approach, with the predictions relying only on the information acquired during the flight phase.

Obstacle Avoidance and Tracking Control of Redundant Robotic Manipulator: An RNN-Based Metaheuristic Approach

A metaheuristic-based control framework, called the beetle antennae olfactory recurrent neural network, for simultaneous tracking control and obstacle avoidance of a redundant manipulator is presented, along with simulation results using a seven-DOF LBR IIWA manipulator.

Motion planning with diffusion maps

Many robotic applications require repeated, on-demand motion planning in mapped environments. In addition, the presence of other dynamic agents, such as people, often induces frequent, dynamic…

Using Diffusion Map for Visual Navigation of a Ground Robot

This paper recovers the ground robot's position and orientation as a function of the coordinates of the robot's image on the low-dimensional manifold obtained from the diffusion map; the approach has higher accuracy and is not sensitive to changes in lighting, the appearance of external moving objects, and other phenomena.
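The diffusion-map embedding used in the two papers above can be sketched in a few lines of numpy. This is a generic textbook construction, not either paper's implementation; the Gaussian kernel bandwidth `eps` and the diffusion time `t` are free parameters the user must tune.

```python
import numpy as np

def diffusion_map(X, eps, n_components=2, t=1):
    """Embed points X (n_samples x n_features) into `n_components`
    diffusion coordinates via the eigenvectors of a random-walk
    transition matrix built from a Gaussian kernel."""
    # Pairwise squared distances and Gaussian affinity kernel.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    # Row-normalize into a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial constant eigenvector (eigenvalue 1); scale the
    # rest by eigenvalue^t to get diffusion coordinates at time t.
    return (vals[1:n_components + 1] ** t) * vecs[:, 1:n_components + 1]
```

Nearby points on the original manifold map to nearby diffusion coordinates, which is what makes the embedding usable for planning and localization.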

Dynamic Obstacle Avoidance Algorithm for Robot Arm Based on Deep Reinforcement Learning

  • Xiaowei Cheng, Shan Liu
  • Computer Science
    2022 IEEE 11th Data Driven Control and Learning Systems Conference (DDCLS)
  • 2022
A new state-space description method suitable for manipulators and dynamic environments is proposed, and the corresponding collision detection method and reward-value calculation function for this state description are designed.

Dynamic Diffusion Maps-based Path Planning for Real-time Collision Avoidance of Mobile Robots

This paper proposes a path planning algorithm that plans a local path using a receding-horizon approach: planning is repeated at every sample time, so the algorithm does not have to carry a prior map at all times.
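The receding-horizon idea can be illustrated with a minimal 2D sketch: roll out a handful of candidate headings over a short horizon, discard rollouts that pass too close to an obstacle, and commit only the first step of the best survivor before replanning. This is a generic illustration under assumed parameters (16 headings, unit step, disk obstacles), not the paper's diffusion-maps-based algorithm.

```python
import numpy as np

def receding_horizon_step(pos, goal, obstacles, horizon=3, step=1.0, clearance=1.0):
    """One replanning cycle: simulate `horizon` steps along each candidate
    heading, reject rollouts that come within `clearance` of any obstacle,
    and return only the first step of the best remaining rollout."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    best_step, best_cost = pos, np.inf
    for theta in np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False):
        d = np.array([np.cos(theta), np.sin(theta)])
        p, safe = pos.copy(), True
        for _ in range(horizon):
            p = p + step * d
            if any(np.linalg.norm(p - np.asarray(o)) < clearance for o in obstacles):
                safe = False
                break
        # Score a safe rollout by its terminal distance to the goal.
        if safe and np.linalg.norm(p - goal) < best_cost:
            best_cost, best_step = np.linalg.norm(p - goal), pos + step * d
    return best_step
```

Because only the first step is executed before the next replanning cycle, moving obstacles can simply be re-sampled at their new positions each sample time, which is why no prior map is needed.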