DCNN for Tactile Sensory Data Classification based on Transfer Learning

@inproceedings{alameh2019dcnn,
  title={DCNN for Tactile Sensory Data Classification based on Transfer Learning},
  author={Mohamad Gabriel Alameh and Ali Ibrahim and Maurizio Valle and Gabriele Moser},
  booktitle={2019 15th Conference on Ph.D Research in Microelectronics and Electronics (PRIME)},
  year={2019}
}
  • Published 15 July 2019
  • Computer Science
Tactile data processing and analysis remain essentially an open challenge. In this framework, we demonstrate a method for touch modality classification using pre-trained convolutional neural networks (CNNs). The 3D tensorial tactile data generated by real human interactions on an electronic skin (E-Skin) are transformed into 2D images. Using a transfer learning approach formalized through a CNN, we address the challenging task of recognizing the object that was touched by the E-Skin…
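The abstract's central preprocessing step, flattening a 3D tactile tensor into a 2D image that a pre-trained CNN can consume, can be sketched as below. The specific mapping (taxel grid unrolled on one axis, time on the other, nearest-neighbour resizing to the 224×224 input expected by common ImageNet backbones) is an illustrative assumption; the paper's exact transform is not given in this excerpt, and the `tactile_to_image` function and the 4×4×50 tensor are hypothetical.

```python
import numpy as np

def tactile_to_image(tensor, out_size=(224, 224)):
    """Flatten a (rows, cols, time) tactile tensor into one 2D grayscale
    image: taxels are unrolled along the first axis, time runs along the
    second, and the result is resized by nearest-neighbour sampling.
    This mapping is illustrative, not the paper's exact transform."""
    rows, cols, frames = tensor.shape
    flat = tensor.reshape(rows * cols, frames)  # taxels x time
    # Normalise to [0, 1] so the image can be fed to a pre-trained CNN.
    lo, hi = flat.min(), flat.max()
    img = (flat - lo) / (hi - lo + 1e-12)
    # Nearest-neighbour resize to the backbone's expected input size.
    ri = np.arange(out_size[0]) * img.shape[0] // out_size[0]
    ci = np.arange(out_size[1]) * img.shape[1] // out_size[1]
    return img[np.ix_(ri, ci)]

# Hypothetical example: a 4x4 taxel grid sampled over 50 time steps.
frame = np.random.rand(4, 4, 50)
image = tactile_to_image(frame)
print(image.shape)  # (224, 224)
```

In a transfer learning setup, images produced this way would be fed to a CNN pre-trained on natural images, with only the final classification layer retrained on the tactile classes.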

Citations
Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition
This work proposes a hybrid architecture performing both visual and tactile 3D object recognition with a MobileNetV2 backbone, chosen for its small size and hence its suitability for mobile devices, such that the network can classify both visual and tactile data.
A Novel Bilinear Feature and Multi-Layer Fused Convolutional Neural Network for Tactile Shape Recognition
A bilinear feature and multi-layer fused convolutional neural network (BMF-CNN) is proposed that handles tactile shapes more effectively than traditional CNNs and hand-crafted feature methods.
Smart Tactile Sensing Systems Based on Embedded CNN Implementations
This paper presents and compares implementations of a convolutional neural network model for tactile data decoding on various hardware platforms, achieving an inference time of 1.2 ms while consuming around 900 μJ.
Touch Modality Classification Using Recurrent Neural Networks
This paper investigates recurrent neural networks (RNNs) for classifying touch modalities represented as spatio-temporal 3D tensor data, aiming at efficient, hardware-friendly touch modality classification approaches suitable for embedded applications.
Low-Cost FMCW Radar Human-Vehicle Classification Based on Transfer Learning
A human-vehicle classification method based on transfer learning is proposed, processing the R-D maps generated by a low-cost short-range 24 GHz FMCW radar with a convolutional neural network (CNN).
Energy Efficient Implementation of Machine Learning Algorithms on Hardware Platforms
An overview of state-of-the-art techniques enabling efficient implementation of machine and deep learning (ML/DL) algorithms to improve energy efficiency is presented, along with an assessment of the algorithms suitable for embedded implementation.
References
Humanoids learn touch modalities identification via multi-modal robotic skin and robust tactile descriptors
A set of biologically inspired feature descriptors is proposed to provide robust, abstract tactile information for touch classification; the descriptors are demonstrated to be invariant to contact location and to movement of the humanoid, and capable of processing single- and multi-touch actions.
Human and object recognition with a high-resolution tactile sensor
It is discussed how SURF-based feature extraction can be up to five times faster than DCNN-based extraction, while the accuracy achieved with DCNN-based feature extraction can be 11.67% higher than with SURF.
Active Clothing Material Perception Using Tactile Sensing and Deep Learning
This work proposes a new framework for active tactile perception with a combined vision-touch system, with the potential to enable robots to help humans with varied clothing-related housework.
Extreme Kernel Sparse Learning for Tactile Object Recognition
To tackle the intrinsic difficulties introduced by the representer theorem, a reduced kernel dictionary learning method is developed by introducing a row-sparsity constraint, and a globally convergent algorithm is derived to solve the optimization problem.
Computational Intelligence Techniques for Tactile Sensing Systems
The research applies novel computational intelligence techniques and a tensor-based approach to the classification of touch modalities; the results provide a procedure to enhance system generalization ability and an architecture for multi-class recognition applications.
Interpretation of the modality of touch on an artificial arm covered with an EIT-based sensitive skin
A thin, flexible and stretchable artificial skin for robotics based on the principle of electrical impedance tomography is presented; features based on touch duration and intensity are shown to be sufficient for good classification of touch modality.
An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition
This work aims to develop a neuromorphic system for tactile pattern recognition; it confirms a tradeoff between response time and classification accuracy, and substantiates the importance of developing efficient sparse codes for encoding sensory data to improve energy efficiency.
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
DeCAF, an open-source implementation of deep convolutional activation features, is released along with all associated network parameters, enabling vision researchers to experiment with deep representations across a range of visual concept learning paradigms.
Multi-column deep neural networks for image classification
On the very competitive MNIST handwriting benchmark, this method is the first to achieve near-human performance and improves the state-of-the-art on a plethora of common image classification benchmarks.
Dermatologist-level classification of skin cancer with deep neural networks
This work demonstrates an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists, trained end-to-end from images directly, using only pixels and disease labels as inputs.