The aim of the Cyber Rodent project is to understand the origins of our reward and affective systems by building artificial agents that share the same intrinsic constraints as natural agents: self-preservation and self-reproduction. A Cyber Rodent is a robot that can search for and recharge from battery packs on the floor and copy its programs to a nearby…
Embodied evolution is a methodology for evolutionary robotics that mimics the distributed, asynchronous and autonomous properties of biological evolution. Evaluation, selection and reproduction are carried out by and between the robots, without any need for human intervention. In this paper we propose a biologically inspired embodied evolution…
When applying machine learning techniques to robot applications, we must prepare an evaluation (fitness) function to assess the robot's performance. In many cases, the fitness function comprises several aspects. A simple way to cope with multiple fitness functions is a weighted summation. This paper presents an adaptive…
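The weighted-summation approach mentioned above can be sketched in a few lines. The component values and weights below are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch of combining multiple fitness aspects by weighted
# summation. Component names and weights are illustrative, not the
# paper's actual terms.

def weighted_fitness(components, weights):
    """Combine several fitness aspects into one scalar score."""
    assert len(components) == len(weights)
    return sum(w * c for w, c in zip(weights, components))

# Example: a robot scored on battery level, task progress, and a
# collision penalty (all hypothetical quantities).
score = weighted_fitness(
    components=[0.8, 0.5, -0.2],   # battery, progress, collision penalty
    weights=[0.5, 0.3, 0.2],
)
```

The adaptive method the abstract alludes to would presumably adjust the weights online rather than fixing them by hand, as done here.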
Soccer-playing robots can develop skills based on the success or failure of previous behavior, and skill development is enhanced when all team members adopt successful behavior. Coevolution has been receiving increased attention as a method for simultaneously developing the control structures of multiple agents. Our ultimate goal is the mutual…
In this paper, we first discuss the meaning of physical embodiment and the complexity of the environment in the context of multiagent learning. We then propose a vision-based reinforcement learning method that acquires cooperative behaviors in a dynamic environment. We use the robot soccer game initiated by RoboCup [12] to illustrate the effectiveness of our…
A method is proposed which accomplishes a whole task consisting of multiple subtasks by coordinating multiple behaviors acquired by vision-based reinforcement learning. First, individual behaviors which achieve the corresponding subtasks are independently acquired by Q-learning, a widely used reinforcement learning method. Each learned behavior can be…
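Q-learning, the method named above, can be illustrated with a minimal tabular sketch. The toy corridor environment and hyperparameters below are assumptions for illustration only; the paper's actual setting is vision-based:

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning on a toy 1-D corridor: the agent starts at
# state 0 and is rewarded for reaching the rightmost state. This is an
# illustrative sketch, not the paper's vision-based implementation.

def q_learning(n_states=5, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)            # Q[(state, action)], default 0
    actions = [-1, +1]                # move left / move right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:      # rightmost state is the goal
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # standard Q-learning update
            best_next = max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```

After learning, the greedy policy at every state prefers moving right, i.e. toward the goal.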
Coordinating multiple behaviors independently obtained by a reinforcement learning method is a key issue in scaling the method to larger and more complex robot learning tasks. Directly combining the state spaces of all individual modules (subtasks) requires enormous learning time and causes hidden states. This paper presents a…
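One common coordination heuristic, which we sketch here as an assumption rather than as the paper's actual method, is to let each pre-learned module evaluate the current state with its own value estimate and execute the action proposed by the highest-scoring module:

```python
# Hedged sketch: coordinate pre-learned behavior modules by executing, in
# each state, the module whose own Q-value estimate is highest. Module
# names, value functions, and states are purely illustrative.

def select_module(modules, state):
    """modules: dict name -> {"q": value_fn, "policy": action_fn}.
    Returns the winning module's name and its proposed action."""
    name = max(modules, key=lambda m: modules[m]["q"](state))
    return name, modules[name]["policy"](state)

modules = {
    "shoot": {"q": lambda s: s["ball_visible"],  "policy": lambda s: "kick"},
    "avoid": {"q": lambda s: s["obstacle_near"], "policy": lambda s: "turn"},
}
name, action = select_module(modules, {"ball_visible": 0.9, "obstacle_near": 0.2})
```

This greedy arbitration avoids combining the modules' state spaces, at the cost of ignoring interactions between subtasks; handling such interactions is presumably what the proposed method addresses.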
The speed and performance of learning depend on the complexity of the learner. A simple learner with few parameters and no internal states can quickly obtain a reactive policy, but its performance is limited. A learner with many parameters and internal states may finally achieve high performance, but it may take enormous time for learning. Therefore, it is…
This paper discusses how a robot can develop its state vector according to the complexity of its interactions with the environment. A method for controlling the complexity is proposed for a vision-based mobile robot whose task is to shoot a ball into a goal while avoiding collisions with a goalkeeper. First, we provide the most difficult situation (the maximum…