Rico Jonschkowski

State representations critically affect the effectiveness of learning in robots. In this paper, we propose a robotics-specific approach to learning such state representations. Robots accomplish tasks by interacting with the physical world. Physics in turn imposes structure on both the changes in the world and on the way robots can effect these changes. …
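One way to make the idea of physics-imposed structure concrete is to express it as a prior, i.e. a loss term on the learned state sequence. The sketch below implements a simple temporal-coherence prior (consecutive states should change only gradually); this particular prior, the function name temporal_coherence_loss, and the toy trajectories are illustrative assumptions, not necessarily the formulation used in the paper.

    import numpy as np

    def temporal_coherence_loss(states):
        """Mean squared change between consecutive learned states s_1 .. s_T."""
        deltas = np.diff(states, axis=0)              # s_{t+1} - s_t for each time step
        return float(np.mean(np.sum(deltas ** 2, axis=1)))

    # Toy check: a smooth trajectory incurs a much lower loss than a jumpy one,
    # so minimizing this term pushes the representation toward gradually changing,
    # physically plausible states.
    smooth = np.cumsum(0.01 * np.ones((50, 2)), axis=0)
    jumpy = np.random.default_rng(0).normal(size=(50, 2))
    print(temporal_coherence_loss(smooth), temporal_coherence_loss(jumpy))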
The success of reinforcement learning in robotic tasks is highly dependent on the state representation: a mapping from high-dimensional sensory observations of the robot to states that can be used for reinforcement learning. Even though many methods have been proposed to learn state representations, it remains an important open problem. Identifying the …
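To make the observation-to-state mapping concrete, here is a minimal sketch of an encoder phi from a high-dimensional observation (e.g., a flattened camera image) to a low-dimensional state for RL. The LinearEncoder class, the random-projection weights, and the chosen dimensions are illustrative assumptions, not the learned model from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    class LinearEncoder:
        """phi: R^D (observation) -> R^d (state), here a fixed random projection."""

        def __init__(self, obs_dim, state_dim):
            self.W = rng.normal(scale=1.0 / np.sqrt(obs_dim), size=(state_dim, obs_dim))

        def __call__(self, observation):
            return self.W @ observation

    # Example: map a flattened 64x64 grayscale image to a 5-dimensional state.
    encoder = LinearEncoder(obs_dim=64 * 64, state_dim=5)
    obs = rng.random(64 * 64)
    state = encoder(obs)      # this 5-D vector is what an RL algorithm would consume
    print(state.shape)        # (5,)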
We describe the winning entry to the Amazon Picking Challenge. From the experience of building this system and competing in the Amazon Picking Challenge, we derive several conclusions: 1) We suggest characterizing robotic system building along four key aspects, each of them spanning a spectrum of solutions: modularity vs. integration, generality vs. …
In classical reinforcement learning, planning is done at the level of atomic actions, which is highly laborious for complex tasks. By using temporal abstraction, an agent can construct plans more efficiently through considering different levels of detail. This thesis investigates new approaches to automatically discover and represent temporal abstractions. Two …
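For reference, temporal abstraction is commonly formalized with options (Sutton, Precup, and Singh): an initiation set, an intra-option policy, and a termination condition. The sketch below shows that data structure on a toy task; the Option and run_option names and the 1-D corridor example are illustrative assumptions, and the thesis may use a different formalism.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Option:
        can_start: Callable[[int], bool]    # initiation set I: where may the option begin?
        policy: Callable[[int], int]        # intra-option policy pi: state -> primitive action
        terminates: Callable[[int], bool]   # termination condition beta: when does it stop?

    def run_option(option, state, step, max_steps=100):
        """Execute an option until its termination condition fires; return the final state."""
        assert option.can_start(state)
        for _ in range(max_steps):
            state = step(state, option.policy(state))
            if option.terminates(state):
                break
        return state

    # Toy usage on a 1-D corridor: the option "walk right until reaching cell 5"
    # lets a planner reason in one step about a behavior spanning five primitive actions.
    walk_right = Option(can_start=lambda s: s < 5,
                        policy=lambda s: +1,
                        terminates=lambda s: s >= 5)
    print(run_option(walk_right, state=0, step=lambda s, a: s + a))   # prints 5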