We propose a technique that integrates genetic programming (GP) and reinforcement learning (RL) to allow a real robot to perform real-time learning. Our technique does not need a precise simulator, because learning is done with the real robot itself. Moreover, it makes it possible to learn optimal actions on real robots. We show the result of an experiment …
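This abstract does not specify how GP and RL are combined, so the following is only a minimal sketch of one plausible arrangement: GP evolves the branching structure of a decision tree, while a one-step reinforcement update refines the action values stored at its leaves on the real robot. All class, state, and action names here are hypothetical.

```python
# Minimal sketch only: assumed GP/RL split, not the authors' exact method.
import random

class Leaf:
    """Action leaf: keeps a value estimate per discrete action."""
    def __init__(self, actions):
        self.q = {a: 0.0 for a in actions}

    def act(self, epsilon=0.1):
        if random.random() < epsilon:            # occasional exploration
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)       # greedy choice otherwise

    def update(self, action, reward, alpha=0.5):
        # One-step value update toward the observed reward.
        self.q[action] += alpha * (reward - self.q[action])

class Branch:
    """Condition node, evolved by GP, that routes a state to a subtree."""
    def __init__(self, predicate, if_true, if_false):
        self.predicate, self.if_true, self.if_false = predicate, if_true, if_false

    def route(self, state):
        node = self
        while isinstance(node, Branch):
            node = node.if_true if node.predicate(state) else node.if_false
        return node                              # always ends at a Leaf

# Usage: route the sensed state to a leaf, act, then reinforce that action.
tree = Branch(lambda s: s["ball_visible"],
              Leaf(["approach", "kick"]),
              Leaf(["search_left", "search_right"]))
state = {"ball_visible": True}
leaf = tree.route(state)
action = leaf.act()
reward = 1.0 if action == "approach" else 0.0    # stand-in for robot feedback
leaf.update(action, reward)
```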
The cooperation of several robots is needed for complex tasks. Cooperation methods for multiple robots generally require exact goal or sub-goal positions. However, it is difficult to specify goal or sub-goal positions for multiple robots so that they cooperate with each other; planning algorithms reduce this burden. In this paper, …
Multi-agent cooperation has proved useful for executing many complex tasks. In our previous paper, we proposed a path planning algorithm based on random sampling for multi-agent cooperation. However, the robots' action paths are liable to deviate because of noise in the real world. Thus, a correction mechanism is required to …
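The abstract names only "a path planning algorithm based on random sampling" without giving details; as a stand-in, the sketch below uses RRT (rapidly-exploring random tree), a standard random-sampling planner. The 2-D workspace bounds, step size, and obstacle test are illustrative assumptions, not from the paper.

```python
import math, random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, iters=5000):
    """Grow a tree by random sampling until a node lands near the goal."""
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        sample = (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue                              # skip nodes inside obstacles
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:       # goal reached: walk back
            path, n = [], new
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None                                   # no path within the budget

# Usage: free space everywhere except a disc of radius 1 around (5, 5).
is_free = lambda p: math.dist(p, (5.0, 5.0)) > 1.0
path = rrt((1.0, 1.0), (9.0, 9.0), is_free)
```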
To execute a task consisting of multiple subtasks with a robot, we need to determine the subtask execution order. In a previous study, we investigated the problem of cooperative object transport by multiple robots: a task in which multiple robots carry an object to a specified goal by passing it between each other. This paper proposes two …
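The paper's two proposed methods are cut off in this abstract, so the sketch below shows only a naive baseline for the stated problem: score every subtask execution order by the total travel distance between handover points and keep the cheapest. The points and start position are hypothetical.

```python
import itertools, math

def order_subtasks(points, start):
    """Exhaustive search over execution orders, minimizing path length."""
    best_order, best_cost = None, math.inf
    for perm in itertools.permutations(points):
        cost, prev = 0.0, start
        for p in perm:
            cost += math.dist(prev, p)           # travel to the next handover
            prev = p
        if cost < best_cost:
            best_order, best_cost = perm, cost
    return best_order, best_cost

# Usage: three handover points, with the carrying robot starting at the origin.
order, cost = order_subtasks([(4.0, 0.0), (1.0, 2.0), (3.0, 3.0)], (0.0, 0.0))
```

Exhaustive search is factorial in the number of subtasks and only viable for a handful of them, which is presumably why dedicated ordering methods are proposed.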
We have been studying techniques for evolutionary robotics and experimenting with various robots to which evolutionary methods are applied. We have paid special attention to real robots and to multi-agent problems involving them, and in this research domain we call such robots "Ingeniously Behaving Agents (IBA)". This paper presents several techniques developed in our IBA …
We introduce a technique that integrates GP and RL to allow a real robot to perform real-time learning. In our former research, we showed the result of an experiment with a real robot, "AIBO", and proved that the technique performs better than the traditional Q-learning method. Based on the proposed technique, we can acquire the common programs …
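For reference, this is a minimal tabular Q-learning loop, i.e. the kind of "traditional Q-learning method" the comparison refers to. The corridor environment and the parameter values below are illustrative, not taken from the paper.

```python
import random
from collections import defaultdict

def q_learning(step_fn, actions, episodes=200, alpha=0.1, gamma=0.9, eps=0.1):
    q = defaultdict(float)                        # (state, action) -> value
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda x: q[(s, x)]))
            s2, r, done = step_fn(s, a)
            target = r if done else r + gamma * max(q[(s2, x)] for x in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

# Usage: a 5-state corridor; stepping right from the last state ends the
# episode with reward 1, every other transition yields 0.
def step(s, a):
    s2 = min(s + 1, 4) if a == "right" else max(s - 1, 0)
    done = (s == 4 and a == "right")
    return s2, (1.0 if done else 0.0), done

q = q_learning(step, ["right", "left"])
```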