We describe our technical approach to competing in the RoboCup 2000 Sony legged robot league. The UNSW team won both the challenge competition and all their soccer matches, emerging as the outright winners for this league against eleven other international teams. The main advantage that the UNSW team had was speed. The robots not only moved quickly, due to a …
Competing in the RoboCup 2000 Sony legged robot league, the UNSW team won both the challenge competition and all their soccer matches, emerging as the outright winners for this league against eleven other international teams. The main advantage that the UNSW team had was speed. A major contributor to the speed was a novel omnidirectional locomotion method …
A challenge in applying reinforcement learning to large problems is how to manage the explosive increase in storage and time complexity. This is especially problematic in multi-agent systems, where the state space grows exponentially in the number of agents. Function approximation based on simple supervised learning is unlikely to scale to complex domains …
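As a rough, back-of-the-envelope illustration of the blow-up described above (not code from the paper, and with illustrative numbers only): a naive joint tabular representation over n agents, each with |S| local states and |A| local actions, needs |S|^n · |A|^n entries.

```python
# Illustration of the joint state-action blow-up in multi-agent tabular RL:
# with |S| local states and |A| local actions per agent, a naive joint
# Q-table over n agents needs |S|**n * |A|**n entries.

def joint_table_size(local_states: int, local_actions: int, n_agents: int) -> int:
    """Number of entries in a naive joint Q-table over all agents."""
    return (local_states ** n_agents) * (local_actions ** n_agents)

if __name__ == "__main__":
    for n in range(1, 6):
        print(n, joint_table_size(local_states=100, local_actions=5, n_agents=n))
```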
This paper presents the CQ algorithm, which decomposes and solves a Markov Decision Process (MDP) by automatically generating a hierarchy of smaller MDPs using state variables. The CQ algorithm uses a heuristic that is applicable to problems that can be modelled by a set of state variables conforming to a special ordering, defined in this paper as a …
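The abstract's definition of the special ordering is truncated here, so the following is only a hedged sketch of one plausible heuristic in this line of work: ordering state variables by how frequently they change along sampled trajectories, with fast-changing variables placed at the bottom of the hierarchy. The function names and the frequency criterion are illustrative assumptions, not the paper's definitions.

```python
from collections import Counter
from typing import List, Sequence, Tuple

State = Tuple[int, ...]  # one entry per state variable

def order_variables_by_change_frequency(trajectory: Sequence[State]) -> List[int]:
    """Order state-variable indices from most- to least-frequently changing.

    Illustrative heuristic only: fast-changing variables would sit at the
    bottom of a hierarchy of sub-MDPs, slow-changing ones near the root.
    """
    changes = Counter()
    for prev, curr in zip(trajectory, trajectory[1:]):
        for i, (p, c) in enumerate(zip(prev, curr)):
            if p != c:
                changes[i] += 1
    n_vars = len(trajectory[0])
    # Every variable appears in the ordering, even if it never changed.
    return sorted(range(n_vars), key=lambda i: -changes[i])
```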
In 2001, the UNSW United team in the Sony legged robot league successfully defended its championship title. The Sony legged robot (ERS-210) has 20 degrees of freedom, including 3 for each of the four legs and 3 for the head. The primary sensor is a colour CMOS camera mounted in the robot's nose. The robot is controlled by a MIPS R4000 processor with 32MB of memory …
Hierarchical reinforcement learning methods have not been able to simultaneously abstract and reuse subtasks with discounted value functions. The contribution of this paper is to introduce two completion functions that jointly decompose the value function hierarchically to solve this problem. The significance of this result is that the benefits of …
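For context, the standard MAXQ-style decomposition that such hierarchical methods build on is sketched below; the paper's specific pair of completion functions for the discounted case is not reproduced here, so treat this only as background notation.

```latex
% Background sketch (Dietterich-style MAXQ), not the paper's new result:
% the value of invoking child subtask a inside parent task i splits into the
% value earned inside a plus a completion term C for finishing i afterwards.
\begin{align}
  Q(i, s, a) &= V(a, s) + C(i, s, a) \\
  V(a, s)    &=
    \begin{cases}
      \max_{a'} Q(a, s, a') & \text{if } a \text{ is composite} \\
      \sum_{s'} P(s' \mid s, a)\, R(s, a, s') & \text{if } a \text{ is primitive}
    \end{cases}
\end{align}
```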
We learn a controller for a flat-footed bipedal robot to optimally respond to both (1) external disturbances caused by, for example, stepping on objects or being pushed, and (2) rapid acceleration, such as a reversal of the demanded walk direction. The reinforcement learning method employed learns an optimal policy by actuating the ankle joints to assert …
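The abstract does not spell out the learning setup, so the following is only a minimal sketch of a tabular Q-learning update over a small set of discretised ankle-torque commands; the action set, constants, and reward interface are assumptions for illustration, not the paper's controller.

```python
import random
from collections import defaultdict

ACTIONS = [-1.0, 0.0, 1.0]   # illustrative, normalised ankle-torque commands
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = defaultdict(float)        # Q[(state, action)] -> estimated return

def choose_action(state):
    """Epsilon-greedy selection over the discrete torque set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard tabular Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```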
This paper introduces an optimised method for extracting natural landmarks to improve localisation during RoboCup soccer matches. The method uses modified 1D SURF features extracted from pixels on the robot's horizon. Consistent with the original SURF algorithm, the extracted features are robust to lighting changes, scale changes, and small changes in …
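A minimal sketch of the 1D idea, assuming grey values are sampled along the horizon row and Haar-like box responses are taken over a 1D integral image; the function names and scale handling here are illustrative and not the paper's implementation.

```python
import numpy as np

def integral_1d(row: np.ndarray) -> np.ndarray:
    """Prefix sums with a leading zero, so any box sum over [a, b) is O(1)."""
    return np.concatenate(([0.0], np.cumsum(row, dtype=np.float64)))

def haar_response(ii: np.ndarray, centre: int, scale: int) -> float:
    """Difference of two adjacent boxes of width `scale` around `centre`."""
    a, b, c = centre - scale, centre, centre + scale
    if a < 0 or c >= len(ii):
        return 0.0
    left = ii[b] - ii[a]
    right = ii[c] - ii[b]
    return right - left

def horizon_responses(row: np.ndarray, scale: int) -> np.ndarray:
    """Haar-like response at every pixel of the horizon row for one scale."""
    ii = integral_1d(row)
    return np.array([haar_response(ii, x, scale) for x in range(len(row))])
```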