Peter Englert

Learning policies that generalize across multiple tasks is an important and challenging research topic in reinforcement learning and robotics. Training individual policies for every single potential task is often impractical, especially for continuous task variations, requiring more principled approaches to share and transfer knowledge among similar tasks …
Efficient skill acquisition is crucial for creating versatile robots. One intuitive way to teach a robot new tricks is to demonstrate a task and enable the robot to imitate the demonstrated behavior. This approach is known as imitation learning. Classical methods of imitation learning, such as inverse reinforcement learning or behavioral cloning, suffer …
One of the most elegant ways of teaching new skills to robots is to provide demonstrations of a task and let the robot imitate this behavior. Such imitation learning is non-trivial: differing anatomies of robot and teacher, and reduced robustness to changes in the control task, are two major difficulties. We present an …
Efficient manipulation requires contact to reduce uncertainty. The manipulation literature refers to this as funneling: a methodology for increasing reliability and robustness by leveraging haptic feedback and control of environmental interaction. However, there is a fundamental gap between traditional approaches to trajectory optimization and this …
Modeling policies in a reproducing kernel Hilbert space (RKHS) renders policy gradient reinforcement learning algorithms non-parametric. As a result, the policies become very flexible and have rich representational potential without a pre-defined set of features. However, their performance might be either non-covariant under re-parameterization of the …
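As a rough illustration of such a non-parametric policy class (a sketch only, with assumed notation; the paper's exact formulation may differ): a Gaussian policy whose mean function lies in the RKHS \(\mathcal{H}_k\) can be written as a kernel expansion over visited states, and the policy gradient then acts directly on that function,

\[
\pi_h(a \mid s) = \mathcal{N}\!\bigl(a \mid h(s), \sigma^2\bigr),
\qquad
h(\cdot) = \sum_{i=1}^{n} \alpha_i\, k(s_i, \cdot) \in \mathcal{H}_k,
\]
\[
\nabla_h J(h) = \mathbb{E}\!\left[\sum_{t} \frac{a_t - h(s_t)}{\sigma^2}\, k(s_t, \cdot)\, R_t\right],
\qquad
h \leftarrow h + \eta\, \nabla_h J(h),
\]

so the update always stays in the span of kernel functions at visited states and no feature set has to be fixed in advance.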
Inverse Optimal Control (IOC) assumes that demonstrations are the solution to an optimal control problem with unknown underlying costs, and extracts the parameters of these underlying costs. We propose the framework of Inverse KKT, which assumes that the demonstrations fulfill the Karush-Kuhn-Tucker conditions of an unknown underlying constrained optimization problem …
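For reference, the KKT conditions alluded to here take the following standard form (a sketch with assumed notation \(f, g, h, w\); see the paper for the precise formulation). A demonstration \(x^{*}\) of the constrained problem

\[
\min_{x} \; f(x; w) \quad \text{s.t.} \quad g(x) \le 0, \; h(x) = 0
\]

should satisfy stationarity and complementarity,

\[
\nabla_x f(x^{*}; w) + \nabla_x g(x^{*})^{\top} \lambda + \nabla_x h(x^{*})^{\top} \nu = 0,
\qquad \lambda \ge 0,
\qquad \lambda_i\, g_i(x^{*}) = 0,
\]

so the cost parameters \(w\) (together with the multipliers) can be estimated by minimizing the residual of these conditions over all demonstrations.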
Sparse Gaussian process (GP) models provide an efficient way to perform regression on large data sets. The key idea is to select a representative subset of the available training data, which induces the sparse GP model approximation. In the past, a variety of selection criteria for GP approximation have been proposed, but they either lack accuracy or …
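As a minimal illustration of such a subset-based approximation, here is a NumPy sketch of the standard Subset-of-Regressors construction; all names are illustrative, and a random subset stands in for whatever selection criterion is actually used.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def sor_gp_predict(X, y, Z, X_star, noise_var=0.01):
    # Subset-of-Regressors prediction with inducing inputs Z (m << n).
    # Solves an m x m system instead of the full n x n one: O(n m^2) cost.
    Kzz = rbf_kernel(Z, Z)
    Kzx = rbf_kernel(Z, X)
    Ksz = rbf_kernel(X_star, Z)
    A = Kzx @ Kzx.T + noise_var * Kzz + 1e-8 * np.eye(len(Z))  # jitter for stability
    mean = Ksz @ np.linalg.solve(A, Kzx @ y)
    var = noise_var * np.sum(Ksz * np.linalg.solve(A, Ksz.T).T, axis=1)
    return mean, var

# Toy usage: regress noisy sin(x) samples with 15 inducing points drawn from the data.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
Z = X[rng.choice(len(X), size=15, replace=False)]  # the "representative subset"
mean, var = sor_gp_predict(X, y, Z, np.linspace(-3.0, 3.0, 100)[:, None])

The choice of the inducing subset Z is exactly what the selection criteria discussed in the abstract determine; the random choice above is only a placeholder.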
An essential aspect of making robots succeed in real-world environments is to give them the ability to robustly perform motions in continuously changing situations. Classical motion planning methods usually create plans for static environments, and the direct execution of such plans in dynamic environments often becomes problematic. We present an approach …