Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints

@article{Zhu2022LearningTS,
  title={Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints},
  author={Xinghao Zhu and Siddarth Jain and Masayoshi Tomizuka and Jeroen van Baar},
  journal={2022 International Conference on Robotics and Automation (ICRA)},
  year={2022},
  pages={4833-4839}
}
Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above to provide high-resolution image observations of contacts. Obtaining accurate volumetric meshes for the deformed elastomer can provide direct contact information and benefit robotic grasping and manipulation. This paper focuses on learning to synthesize the volumetric mesh of the elastomer based on the image imprints acquired from vision-based tactile sensors. Synthetic image-mesh pairs and real… 
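The abstract describes regressing the deformed elastomer's volumetric mesh from a tactile image. Below is a minimal sketch of that kind of image-to-mesh regression, assuming a fixed mesh topology so the network only predicts per-node displacements; all module names, layer sizes, and the training objective are hypothetical and not the authors' implementation.

import torch
import torch.nn as nn

class ImprintToMeshNet(nn.Module):
    """Regress per-node displacements of a fixed-topology volumetric mesh
    from a single tactile image (illustrative sketch, not the paper's model)."""

    def __init__(self, num_nodes: int):
        super().__init__()
        self.encoder = nn.Sequential(               # CNN encoder for the tactile imprint
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, num_nodes * 3)   # 3D displacement per mesh node
        self.num_nodes = num_nodes

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(image)
        return self.head(feat).view(-1, self.num_nodes, 3)

# Supervised training step on synthetic image-mesh pairs (e.g. generated by FEM).
model = ImprintToMeshNet(num_nodes=1000)
image = torch.rand(8, 3, 240, 320)                  # batch of tactile images
gt_disp = torch.rand(8, 1000, 3)                    # ground-truth node displacements
loss = nn.functional.mse_loss(model(image), gt_disp)
loss.backward()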

Synthesizing and Simulating Volumetric Meshes from Vision-based Tactile Imprints

This work focuses on learning to simulate and synthesize the volumetric mesh of the elastomer from the image imprints acquired by tactile sensors, proposing a train-then-adapt approach that leverages synthetic image-mesh pairs generated with finite element methods (FEM) and real-world images from physical sensors.
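The train-then-adapt description suggests a two-stage pipeline: supervised pretraining on synthetic FEM image-mesh pairs, followed by adaptation on real sensor data. A hedged two-stage skeleton, reusing the hypothetical ImprintToMeshNet sketch above; the dataset objects, learning rates, and the choice to fine-tune the whole model during adaptation are assumptions for illustration, not details from the paper.

import torch
from torch.utils.data import DataLoader

def pretrain_on_synthetic(model, synthetic_pairs, epochs=10):
    """Stage 1: supervise on FEM-generated (image, mesh-displacement) pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loader = DataLoader(synthetic_pairs, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for image, gt_disp in loader:
            loss = torch.nn.functional.mse_loss(model(image), gt_disp)
            opt.zero_grad(); loss.backward(); opt.step()

def adapt_on_real(model, real_pairs, epochs=3):
    """Stage 2: adapt on a small set of real sensor examples at a lower
    learning rate; the paper's actual adaptation strategy may differ."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loader = DataLoader(real_pairs, batch_size=8, shuffle=True)
    for _ in range(epochs):
        for image, target in loader:
            loss = torch.nn.functional.mse_loss(model(image), target)
            opt.zero_grad(); loss.backward(); opt.step()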

Allowing Safe Contact in Robotic Goal-Reaching: Planning and Tracking in Operational and Null Spaces

It is shown that allowing safe contact improves goal-reaching efficiency and provides feasible solutions in highly collisional scenarios where collision-free constraints cannot be enforced, and that planning in the null space, in addition to the operational space, improves trajectory safety.

References

Ground Truth Force Distribution for Learning-Based Tactile Sensing: A Finite Element Approach

The method presented in this article uses a finite element model to obtain ground-truth data for the three-dimensional force distribution and is applied to a vision-based tactile sensor that aims to reconstruct the contact force distribution purely from images.
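This reference supervises a learned image-to-force mapping with FEM-computed force distributions. A minimal sketch of that supervision setup, assuming the force distribution is rasterized onto a fixed image grid with three channels (Fx, Fy, Fz per cell); the architecture and discretization are illustrative assumptions, not taken from the reference.

import torch
import torch.nn as nn

class ImageToForceMap(nn.Module):
    """Map a tactile image to a dense 3-channel force map (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 1),                    # per-pixel 3D force vector
        )

    def forward(self, image):
        return self.net(image)

# FEM-computed force distributions serve as the regression target.
model = ImageToForceMap()
image = torch.rand(4, 3, 64, 64)                    # tactile images
fem_force = torch.rand(4, 3, 64, 64)                # ground truth from the FEM model
loss = nn.functional.mse_loss(model(image), fem_force)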

Sim-to-Real for Robotic Tactile Sensing via Physics-Based Simulation and Learned Latent Projections

This work develops an efficient, freely accessible FEM model of the BioTac and comprises one of the first efforts to combine self-supervision, cross-modal transfer, and sim-to-real transfer for tactile sensors.

Generation of GelSight Tactile Images for Sim2Real Learning

A novel approach for simulating a GelSight tactile sensor in the commonly used Gazebo simulator; the simulated sensor can indirectly sense forces, geometry, texture, and other object properties, enabling Sim2Real learning with tactile sensing.

From Pixels to Percepts: Highly Robust Edge Perception and Contour Following Using Deep Learning and an Optical Biomimetic Tactile Sensor

This letter applies deep learning to an optical biomimetic tactile sensor, the TacTip, which images an array of papillae inside its sensing surface analogous to structures within human skin. A deep convolutional neural network gives reliable edge perception and, thus, a robust policy for planning contact points to move around object contours.

Tactile Mapping and Localization from High-Resolution Tactile Imprints

Results show that by exploiting the dense tactile information, this work can reconstruct the shape of objects with high accuracy and perform on-line object identification and localization, opening the door to reactive manipulation guided by tactile sensing.

GelSight: High-Resolution Robot Tactile Sensors for Estimating Geometry and Force

This paper reviews the development of GelSight, with an emphasis on the sensing principle and sensor design, and introduces the design of the sensor’s optical system, the algorithms for shape, force, and slip measurement, and the hardware designs and fabrication of different sensor versions.

GelSlim: A High-Resolution, Compact, Robust, and Calibrated Tactile-sensing Finger

This work describes the development of a high-resolution tactile-sensing finger for robot grasping, featuring an integration that is slimmer, more robust, and more homogeneous in output than previous vision-based tactile sensors.

Interpreting and Predicting Tactile Signals via a Physics-Based and Data-Driven Framework

This research extends robotic tactile sensing beyond reduced-order models through the automated creation of a precise tactile dataset for the BioTac over diverse physical interactions, and neural-network-based mappings from raw BioTac signals to low-dimensional experimental data and, more importantly, high-density FE deformation fields.

Dense Tactile Force Estimation using GelSlim and inverse FEM

In this paper, we present a new version of tactile sensor GelSlim 2.0 with the capability to estimate the contact force distribution in real time. The sensor is vision-based and uses an array of
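The GelSlim 2.0 summary describes recovering the contact force distribution by inverting an FEM model of the gel. Below is a toy linear-elastic version of that inverse problem, in which the compliance matrix maps contact forces to observed marker displacements and the forces are recovered by regularized least squares; every matrix is a random stand-in, not the sensor's calibrated model.

import numpy as np

# Toy inverse-FEM force estimation: for a linear-elastic gel, displacements u
# and boundary forces f satisfy K u = f, i.e. u = C f with compliance C = K^{-1}.
# Displacements are observed at marker nodes, forces are sought at candidate
# contact nodes, and the overdetermined system is solved by regularized least squares.
rng = np.random.default_rng(0)
n = 40                                              # nodes in the toy mesh
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)                         # SPD stiffness stand-in
C = np.linalg.inv(K)                                # compliance matrix

contact = np.arange(10, 16)                         # candidate contact nodes
markers = rng.choice(n, size=25, replace=False)     # nodes with observed displacement

f_true = np.array([0.0, -0.4, -0.8, -0.8, -0.4, 0.0])      # true contact forces
u_obs = C[np.ix_(markers, contact)] @ f_true
u_obs += 1e-6 * rng.standard_normal(u_obs.shape)            # measurement noise

M = C[np.ix_(markers, contact)]                     # maps contact forces -> observed u
lam = 1e-9                                          # Tikhonov regularization weight
f_est = np.linalg.solve(M.T @ M + lam * np.eye(len(contact)), M.T @ u_obs)

print(np.round(f_est, 3))                           # close to f_true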

Tactile Object Pose Estimation from the First Touch with Geometric Contact Rendering

An approach to tactile pose estimation from the first touch for known objects, which provides high-accuracy pose estimates from distinctive tactile observations while regressing pose distributions to account for contact shapes that could result from different object poses.