This paper presents a deep learning architecture for detecting the palm and fingertip positions of stable grasps directly from partial object views. The architecture is trained on RGBD image patches of fingertip and palm positions from grasps computed on complete object models with a grasping simulator. At runtime, the architecture is able to estimate …
This work provides an architecture that enables robotic grasp planning via shape completion. Shape completion is accomplished with a 3D convolutional neural network (CNN). The network is trained on our own new open-source dataset of over 440,000 3D exemplars captured from varying viewpoints. At runtime, a 2.5D point cloud captured from a single …
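The shape-completion pipeline above feeds a captured 2.5D point cloud to a 3D CNN, which implies a voxelization pre-processing step. A minimal sketch of that step is shown below; the grid resolution, normalization, and function name are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def voxelize(points, grid=40):
    """Map an N x 3 point cloud into a binary occupancy grid.

    Hypothetical pre-processing for a 3D CNN: the cloud is
    normalized to the unit cube, then each point marks its
    nearest voxel. Resolution (40^3) is an assumed value.
    """
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    scale = (hi - lo).max() or 1.0          # avoid divide-by-zero for degenerate clouds
    idx = ((pts - lo) / scale * (grid - 1)).astype(int)
    idx = np.clip(idx, 0, grid - 1)
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1  # set occupied cells
    return vox

# Example: three points span the cloud's bounding box.
occupancy = voxelize([[0, 0, 0], [1, 1, 1], [0.5, 0.5, 0.5]])
```

The resulting dense occupancy grid is the kind of fixed-size tensor a 3D convolutional network can consume directly.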