René Felix Reinhart

We present an efficient online learning scheme for non-negative sparse coding in autoencoder neural networks. It comprises a novel synaptic decay rule that ensures non-negative weights in combination with an intrinsic self-adaptation rule that optimizes sparseness of the non-negative encoding. We show that non-negativity constrains the space of solutions …
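As a rough, hypothetical illustration of such a scheme (not the paper's exact rule), the sketch below combines an online autoencoder update with a decay-and-clip step that keeps the weight matrix non-negative. The dimensions, learning rates, and the tied-weight decoder are all assumptions made for this example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8-d non-negative inputs, 4 hidden units.
n_in, n_hidden = 8, 4
W = np.abs(rng.normal(0.1, 0.05, (n_hidden, n_in)))  # start non-negative
eta, decay = 0.05, 1e-3

def step(x, W):
    """One online autoencoder update with a decay that keeps W >= 0."""
    h = np.maximum(W @ x, 0.0)          # non-negative encoding
    x_hat = W.T @ h                     # tied-weight decoder (assumption)
    err = x - x_hat
    W = W + eta * np.outer(h, err)      # Hebbian-like reconstruction update
    W = np.maximum(W - decay * W, 0.0)  # decay + clip keeps weights non-negative
    return W

for _ in range(100):
    x = np.abs(rng.normal(size=n_in))
    W = step(x, W)
```

The clip after the decay step is what enforces the non-negativity constraint throughout learning, regardless of the sign of the reconstruction error.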
The data-driven approximation of vector fields that encode dynamical systems is a persistently hard task in machine learning. If data is sparse and given in the form of velocities derived from only a few trajectories, state-space regions exist where no information on the vector field and its induced dynamics is available. Generalization towards such regions is …
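To illustrate the problem, one can sketch a kernel ridge regression of a vector field from sparse velocity samples; the RBF model, kernel width, and toy field dx/dt = -x are illustrative choices, not the paper's method. Far from the training data the prediction decays to zero, which is exactly the kind of uninformed region such an abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse training data: velocities sampled from the toy field dx/dt = -x.
X = rng.uniform(-1, 1, (30, 2))
V = -X

def k(A, B, sigma=0.5):
    """RBF kernel between two sets of points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Kernel ridge regression of the vector field.
K = k(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), V)

def vfield(x):
    """Predicted velocity at query point(s) x."""
    return k(np.atleast_2d(x), X) @ alpha

# Inside the sampled region the fit is close; far outside, the RBF
# prediction decays to zero, i.e. the model has no information there.
```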
We introduce a novel recurrent neural network controller that learns and maintains multiple solutions of the inverse kinematics. Redundancies are resolved dynamically by means of multi-stable attractor dynamics. The associative network comprises a combined forward and inverse model of the robot's kinematics and enables flexible selection of control spaces …
We introduce a novel control framework based on a recurrent neural network for reaching movement generation. The network first learns forward and inverse kinematics, i.e. to associate end effector coordinates with joint angles, by means of attractor states. Modulating the attractor states with the desired target input allows generalization of the learned …
We implement completely data-driven and efficient online learning from temporally correlated data in a reservoir network setup. We show that attractor states rather than transients are used for computation when learning inverse kinematics for the redundant robot arm PA-10. Our findings also shed light on the role of output feedback.
The paper presents a modular architecture for bi-manual skill acquisition from kinesthetic teaching. Skills are learned and embedded over several representational levels comprising a compact movement representation by means of movement primitives, a task space description of the bi-manual tool constraint, and the particular redundancy resolution of the …
We present a novel learning scheme to imprint stable vector fields into Extreme Learning Machines (ELMs). The networks represent movements, where asymptotic stability is incorporated through constraints derived from Lyapunov stability theory. We show that our approach successfully performs stable and smooth point-to-point movements learned from human …
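A minimal ELM sketch, assuming a fixed random hidden layer and a ridge-regularized linear readout, shows how such a network represents a (here trivially stable) vector field; the Lyapunov-derived stability constraints of the paper are omitted, and all sizes and data are toy choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: learn dx/dt = -x, a trivially stable vector field.
X = rng.uniform(-1, 1, (200, 2))
Y = -X

# ELM: fixed random hidden layer, only the linear readout is trained.
n_hidden = 50
W_in = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)

# Ridge-regularized readout (closed-form least squares).
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)

pred = np.tanh(X @ W_in + b) @ W_out  # predicted velocities
```

Because only the readout is trained, learning reduces to a single linear solve, which is what makes ELM training fast; the stability guarantees of the paper would enter as additional constraints on `W_out`.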
We shed light on the key ingredients of reservoir computing and analyze the contribution of the network dynamics to the spatial encoding of inputs. To this end, we introduce attractor-based reservoir networks for processing of static patterns and compare their performance and encoding capabilities with a related feedforward approach. We show that the network …
Output feedback is crucial for autonomous and parameterized pattern generation with reservoir networks. Read-out learning affects the output feedback loop and can lead to error amplification. Regularization is therefore important both for generalization and for reducing error amplification. We show that regularization of the reservoir and the read-out …
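A minimal echo state network sketch (illustrative, with assumed sizes and signals) shows where read-out regularization enters: ridge regression keeps the read-out weights small, which limits error amplification once the output is fed back into the reservoir:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal echo state network (illustrative, not the paper's exact setup).
n_res, T = 100, 500
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1
W_in = rng.normal(size=n_res)

u = np.sin(np.linspace(0, 20, T))  # input signal
target = np.roll(u, -1)            # task: predict the next input value

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W_res @ x + W_in * u[t])  # fixed, untrained reservoir
    states[t] = x

# Ridge regression on the read-out: the penalty lam keeps the read-out
# weights small, which damps error amplification under output feedback.
lam = 1e-3
W_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res), states.T @ target)
pred = states @ W_out
```

Increasing `lam` trades training accuracy for smaller read-out weights; in a closed feedback loop, the smaller weights make the loop less sensitive to prediction errors being re-injected.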