Object manifold learning with action features for active tactile object recognition


In this paper, we consider an object recognition problem based on tactile information from a robot hand. The robot performs an exploratory action on the object to obtain tactile information; however, poorly designed actions may not be sufficiently informative. In contrast, if sample data could be collected by sequentially performing informative actions, i.e., active learning, the required time would be drastically reduced. To this end, we propose a novel approach to active tactile object recognition that combines an active learning scheme with a nonlinear dimensionality reduction method. We first extract the object manifold, each coordinate of which represents an object, from tactile sensor data and action features using Gaussian Process Latent Variable Models. At the same time, a probabilistic model of the observed data, conditioned on the action and the object, is learned. With the learned model, optimally informative exploratory actions can then be computed sequentially and performed to efficiently collect the data needed for recognition. Experimental results with synthetic data and a real robot verify the effectiveness of the proposed method.

DOI: 10.1109/IROS.2014.6942622
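The sequential selection of "optimally-informative exploratory actions" described in the abstract can be illustrated with a small sketch. Note this is a toy stand-in, not the paper's method: the paper learns the observation model with a GPLVM, whereas here we assume a hypothetical discrete set of objects and actions with a simple Gaussian observation model, and greedily pick the action that minimizes the expected posterior entropy over object hypotheses.

```python
import numpy as np

# Toy sketch of information-based action selection (assumed setup, not the
# paper's GPLVM model): discrete objects and actions, with a Gaussian
# observation model p(obs | action, object) given by a mean table and
# shared noise level.
rng = np.random.default_rng(0)
n_objects, n_actions = 4, 3
means = rng.normal(size=(n_actions, n_objects))  # mean reading per (action, object)
sigma = 0.5                                      # observation noise std


def entropy(p):
    """Shannon entropy of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))


def expected_posterior_entropy(belief, action, n_samples=200):
    """Monte Carlo estimate of E_obs[ H(p(object | obs, action)) ]."""
    total = 0.0
    for _ in range(n_samples):
        obj = rng.choice(n_objects, p=belief)        # sample a candidate object
        obs = rng.normal(means[action, obj], sigma)  # simulate a tactile reading
        lik = np.exp(-0.5 * ((obs - means[action]) / sigma) ** 2)
        post = belief * lik                          # Bayes update of the belief
        post /= post.sum()
        total += entropy(post)
    return total / n_samples


belief = np.full(n_objects, 1.0 / n_objects)  # uniform prior over objects
# The most informative next action is the one expected to shrink the
# posterior entropy the most.
best_action = min(range(n_actions),
                  key=lambda a: expected_posterior_entropy(belief, a))
print("most informative action:", best_action)
```

After executing the chosen action on the real object, the belief would be updated with the actual observation and the selection repeated, which is the sequential loop the abstract describes.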



Cite this paper

@article{Tanaka2014ObjectML,
  title={Object manifold learning with action features for active tactile object recognition},
  author={Daisuke Tanaka and Takamitsu Matsubara and Kentaro Ichien and Kenji Sugimoto},
  journal={2014 IEEE/RSJ International Conference on Intelligent Robots and Systems},
  year={2014},
  pages={608-614}
}