Cognitive developmental robotics as a new paradigm for the design of humanoid robots
Visual information processing is important for robots that act in human-interactive environments. In this paper, we propose a bottom-up method for acquiring visual representations of the robot's body and of objects that are suitable for motion learning. An advantage of the proposed framework is that it requires no hand-coding specific to the visual properties of the objects or the robot. A subtraction technique and a self-organizing map (SOM) are used to compose the state space from images in which the robot body and objects have been extracted. The robot's motion is planned based on a reachable set. The task of moving an object to a target position is divided into two phases: reaching a position suitable for starting a pushing motion, and pushing the object to the target. The proposed method is verified by experiments on pushing manipulation of an object with a robot arm.
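To make the state-space construction concrete, the following is a minimal Python sketch of the two components named above: a subtraction step that extracts foreground (robot body / object) pixels from an image, and a small SOM trained on the extracted pixel coordinates whose best-matching unit serves as a discrete state. All data, grid sizes, learning-rate schedules, and thresholds here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Subtraction step (hypothetical synthetic images) ---
background = np.zeros((32, 32))
frame = background.copy()
frame[10:14, 20:24] = 1.0                   # a bright "object" region
mask = np.abs(frame - background) > 0.5     # extracted foreground pixels

# Coordinates of the extracted pixels serve as input features
coords = np.argwhere(mask).astype(float)

# --- Minimal self-organizing map over the extracted coordinates ---
def train_som(data, grid=(4, 4), iters=200, lr0=0.5, sigma0=2.0):
    # Unit positions on the SOM lattice, used for neighborhood distances
    lattice = np.array(
        [(i, j) for i in range(grid[0]) for j in range(grid[1])], dtype=float
    )
    n_units = grid[0] * grid[1]
    # Initialize unit weights randomly within the data range
    w = data.min(0) + rng.random((n_units, data.shape[1])) * (np.ptp(data, 0) + 1e-9)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(1))       # best-matching unit
        lr = lr0 * (1 - t / iters)                   # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 0.5       # decaying neighborhood width
        d2 = ((lattice - lattice[bmu]) ** 2).sum(1)
        h = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian neighborhood
        w += lr * h[:, None] * (x - w)               # pull units toward the input
    return w

weights = train_som(coords)

# A discrete state is the index of the best-matching unit for an observation
state = int(np.argmin(((weights - coords.mean(0)) ** 2).sum(1)))
```

In this sketch the SOM quantizes the continuous image-derived features into a small discrete set of units, which is the sense in which it "composes the state space" for the subsequent motion learning.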