Norikazu Sugimoto

Multiple lines of evidence suggest that the central nervous system (CNS) acquires and switches among internal models for adaptive control in various environments. However, little is known about the neural mechanisms responsible for this switching. A recent computational model for simultaneous learning and switching of internal models proposes two separate switching …
In this study, we propose an extension of the MOSAIC architecture for controlling real humanoid robots. MOSAIC was originally proposed by neuroscientists to explain the human capacity for adaptive control. The modular structure of the MOSAIC model is well suited to nonlinear and non-stationary control problems. Both humans and humanoid robots have …
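The core mechanism of MOSAIC can be sketched briefly: each module pairs a forward model, which predicts the next state, with a controller, and a softmax over the forward models' prediction errors yields "responsibility" signals that weight each controller's command. The snippet below is a minimal illustration of that weighting; the error values, controller outputs, and `sigma` scale are hypothetical, not taken from the paper.

```python
import numpy as np

def responsibilities(pred_errors, sigma=1.0):
    """Softmax of negative squared prediction errors (MOSAIC-style)."""
    scores = -np.asarray(pred_errors, dtype=float) ** 2 / (2.0 * sigma**2)
    scores -= scores.max()          # subtract max for numerical stability
    w = np.exp(scores)
    return w / w.sum()

# Two hypothetical modules: module 0 predicts the current dynamics well,
# module 1 predicts poorly, so module 0 gains most of the responsibility.
errors = [0.1, 2.0]
w = responsibilities(errors)
commands = np.array([1.0, -1.0])    # each module's control output (made up)
u = float(w @ commands)             # responsibility-weighted blended command
print(w, u)
```

The blended command `u` is dominated by the well-predicting module, which is how the architecture switches controllers smoothly as the environment changes.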
Learning about new objects that a robot sees for the first time is a difficult problem because it is not clear how to define the concept of an object in general terms. In this paper we treat as objects those physical entities composed of features that move consistently when the robot acts upon them. Among the possible actions that a robot could …
Direct transfer of human motion trajectories to humanoid robots does not result in dynamically stable robot movements, due to the differences between human and humanoid robot kinematics and dynamics. We developed a system that converts human movements captured by a low-cost RGB-D camera into dynamically stable humanoid movements. The transfer of human movements …
Reinforcement learning (RL) provides a basic framework for autonomous robots to learn control policies that maximize future cumulative reward in complex environments. To achieve high performance, RL controllers must account for the complex external dynamics of the movements and the task (reward function) and optimize the control commands accordingly. For example, a robot playing tennis …
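The objective mentioned here, maximizing future cumulative reward, is usually formalized as the discounted return. A minimal sketch, with a made-up reward sequence and discount factor:

```python
def discounted_return(rewards, gamma=0.9):
    """Compute sum_t gamma^t * r_t, the quantity an RL controller maximizes.

    Iterating backwards turns the sum into the recursion g_t = r_t + gamma * g_{t+1}.
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# Illustrative reward sequence: 1 + 0.9*0 + 0.81*2 = 2.62
print(discounted_return([1.0, 0.0, 2.0]))
```

The discount factor `gamma` trades off immediate against future reward; values near 1 make the controller far-sighted, which matters for tasks like tennis where early positioning pays off only several steps later.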