Klaus Greff

Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational …
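
For context, the vanilla LSTM cell that such variant studies usually take as the baseline can be written as follows; this is the standard formulation, reproduced here for reference rather than quoted from the paper itself.

    i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)
    f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)
    o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)
    \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)
    c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
    h_t = o_t \odot \tanh(c_t)

Variant studies typically ask which of these computational components (the gates, the output nonlinearity, peephole connections, and so on) can be removed or simplified without hurting performance.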
Background subtraction is an important preprocessing step in many modern Computer Vision systems. Much work has been done, especially in the field of color-image-based foreground segmentation. But the task is not an easy one, so state-of-the-art background subtraction algorithms are complex both in programming logic and in runtime. Depth cameras might offer …
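
To illustrate why depth data can simplify the task, here is a minimal per-pixel depth-threshold sketch, not the system described above; the function name, array layout, and threshold value are assumptions made for the example.

    import numpy as np

    def depth_foreground_mask(depth_frame, background_depth, threshold_mm=50):
        """Mark pixels whose depth differs from a static background model by more than a threshold."""
        # many depth cameras report 0 for pixels with no valid measurement
        valid = (depth_frame > 0) & (background_depth > 0)
        diff = np.abs(depth_frame.astype(np.int32) - background_depth.astype(np.int32))
        return valid & (diff > threshold_mm)

    # usage: background_depth could be, e.g., a per-pixel median over frames of the empty scene
    # mask = depth_foreground_mask(frame, background_depth)

Because depth is largely invariant to illumination changes and shadows, even a simple threshold like this avoids several of the failure modes that make color-based methods complex.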
The study of aerosol composition for air quality research involves the analysis of high-dimensional single-particle mass spectrometry data. We describe, apply, and evaluate a novel interactive visual framework for dimensionality reduction of such data. Our framework is based on non-negative matrix factorization with specifically defined regularization terms …
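
For readers unfamiliar with the underlying technique, regularized non-negative matrix factorization approximates a non-negative data matrix V by low-rank non-negative factors. The regularizers R_W, R_H and the weights \lambda below are generic placeholders, since the abstract does not spell out the specific terms used.

    \min_{W \ge 0,\; H \ge 0} \; \lVert V - W H \rVert_F^2 + \lambda_W R_W(W) + \lambda_H R_H(H)

In a dimensionality-reduction setting, the factors provide a parts-based, low-rank summary of the spectra, which is what makes the factorization attractive for interactive exploration.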
Disentangled distributed representations of data are desirable for machine learning, since they are more expressive and can generalize from fewer examples. However, for complex data, the distributed representations of multiple objects present in the same input can interfere and lead to ambiguities, which is commonly referred to as the binding problem. We …
The slime mould Physarum polycephalum is a large cell exhibiting rich spatial non-linear electrical characteristics. We exploit the electrical properties of the slime mould to implement logic gates using a flexible hardware platform designed for investigating the electrical properties of a substrate (Mecobo). We apply arbitrary electrical signals to …
We present a framework for efficient perceptual inference that explicitly reasons about the segmentation of its inputs and features. Rather than being trained for any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. By enriching the representations of a neural network, we enable it …
Hyperparameter selection generally relies on running multiple full training trials, with selection based on validation set performance. We propose a gradient-based approach for locally adjusting hyperparameters during training of the model. Hyperparameters are adjusted so as to make the model parameter gradients, and hence updates, more advantageous for the …
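
As a rough illustration of this class of methods (adjusting a hyperparameter online from gradient information, in the spirit of hypergradient-style updates), here is a sketch for the learning rate of plain SGD; it is not the specific update rule proposed in the paper, and grad_fn, alpha, and beta are assumed names.

    import numpy as np

    def train(grad_fn, theta, alpha=0.01, beta=1e-4, steps=100):
        """Tune the learning rate alpha during training using successive gradients."""
        prev_grad = np.zeros_like(theta)
        for _ in range(steps):
            g = grad_fn(theta)
            # if the new gradient points in the same direction as the previous one,
            # the last step was too small, so increase alpha; otherwise decrease it
            alpha += beta * np.dot(g, prev_grad)
            theta = theta - alpha * g
            prev_grad = g
        return theta, alpha

The appeal of such local adjustment is that a single training run can adapt the hyperparameter on the fly, instead of requiring many full trials selected by validation performance.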