Probably the hardest test for a theory of brain function is the explanation of language processing in the human brain, in particular the interplay of syntax and semantics. Clearly, such an explanation can only be very speculative, because there are essentially no animal models and detailed neural processing is hard to study in humans. The approach …
Language understanding is a long-standing problem in computer science, yet the human brain processes complex languages seemingly without difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities on the single …
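As a rough illustration of the kind of building block such a model rests on, the following Python sketch implements a binary (Willshaw-style) associative memory that completes a partial or ambiguous cue to a stored sparse word pattern. The pattern size, the sparsity, and the three-word toy vocabulary are assumptions made for illustration only, not parameters taken from the paper.

    import numpy as np

    class BinaryAssociativeMemory:
        def __init__(self, n):
            self.n = n
            self.W = np.zeros((n, n), dtype=np.uint8)  # clipped (0/1) synaptic matrix

        def store(self, pattern):
            # Clipped Hebbian learning: a synapse is set wherever two units co-fire.
            p = pattern.astype(np.uint8)
            self.W |= np.outer(p, p)

        def retrieve(self, cue, k):
            # One step of k-winners-take-all retrieval.
            sums = self.W.astype(int) @ cue.astype(int)
            out = np.zeros(self.n, dtype=np.uint8)
            out[np.argsort(sums)[-k:]] = 1
            return out

    def sparse_pattern(n, k, rng):
        p = np.zeros(n, dtype=np.uint8)
        p[rng.choice(n, size=k, replace=False)] = 1
        return p

    rng = np.random.default_rng(0)
    n, k = 1000, 10
    mem = BinaryAssociativeMemory(n)
    words = {w: sparse_pattern(n, k, rng) for w in ["plum", "apple", "cup"]}
    for p in words.values():
        mem.store(p)

    # A partial, ambiguous cue (half of the "plum" assembly) is completed
    # back toward the full stored word pattern.
    cue = words["plum"].copy()
    cue[np.flatnonzero(cue)[:5]] = 0
    completed = mem.retrieve(cue, k)
    print(int(completed @ words["plum"]), "of", k, "assembly units recovered")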
Using associative memories and sparse distributed representations, we have developed a system that can learn to associate words with objects, properties such as colors, and actions. This system is used in a robotics context to enable a robot to respond to spoken commands like "bot show plum" or "bot put apple to yellow cup". The scenario for this is a robot …
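A minimal sketch, under an assumed toy vocabulary and assumed pattern sizes, of how a hetero-associative memory over sparse distributed representations can ground spoken words in referent assemblies for objects, properties and actions; it does not reproduce the paper's actual architecture.

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 512, 8

    def sparse(n, k):
        # Random sparse binary pattern with exactly k active units.
        p = np.zeros(n, dtype=np.uint8)
        p[rng.choice(n, size=k, replace=False)] = 1
        return p

    # Assumed toy vocabulary and referent repertoire (not taken from the paper).
    word_patterns = {w: sparse(n, k) for w in ["bot", "show", "plum", "yellow", "cup"]}
    referent_patterns = {r: sparse(n, k) for r in
                         ["SHOW_ACTION", "PLUM_OBJECT", "YELLOW_PROPERTY", "CUP_OBJECT"]}

    # Clipped Hebbian learning of word -> referent associations.
    W = np.zeros((n, n), dtype=np.uint8)
    for w, r in [("show", "SHOW_ACTION"), ("plum", "PLUM_OBJECT"),
                 ("yellow", "YELLOW_PROPERTY"), ("cup", "CUP_OBJECT")]:
        W |= np.outer(referent_patterns[r], word_patterns[w])

    def ground(word):
        # k-winners-take-all retrieval of the referent assembly addressed by a word.
        sums = W.astype(int) @ word_patterns[word].astype(int)
        out = np.zeros(n, dtype=np.uint8)
        out[np.argsort(sums)[-k:]] = 1
        return out

    # The content words of "bot show plum" address an action and an object assembly.
    for w in ["show", "plum"]:
        referent = ground(w)
        best = max(referent_patterns, key=lambda r: int(referent @ referent_patterns[r]))
        print(w, "->", best)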
The brain representations of words and their referent actions and objects appear to be strongly coupled neuronal assemblies distributed over several cortical areas. In this work we describe the implementation of a cell assembly-based model of several visual, language, planning, and motor areas to enable a robot to understand and react to simple spoken …
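The end product of such a multi-area model is, roughly, an action frame the robot can act on. The purely symbolic Python sketch below illustrates only that target representation for the example commands; the word classes and slot names are invented for illustration, and the model of course realizes this with distributed cell assemblies rather than a dictionary.

    # Assumed word classes for a toy command grammar of the form
    # "bot <action> [property] <object> [to [property] <object>]".
    ACTIONS    = {"show", "lift", "put"}
    OBJECTS    = {"plum", "apple", "cup"}
    PROPERTIES = {"green", "yellow", "red"}

    def parse_command(sentence):
        """Fill an action frame from a simple spoken command."""
        frame = {"action": None, "object": None, "object_prop": None,
                 "target": None, "target_prop": None}
        slot_obj, slot_prop = "object", "object_prop"
        for word in sentence.lower().split():
            if word == "to":
                slot_obj, slot_prop = "target", "target_prop"  # switch to the goal slots
            elif word in ACTIONS:
                frame["action"] = word
            elif word in PROPERTIES:
                frame[slot_prop] = word
            elif word in OBJECTS:
                frame[slot_obj] = word
        return frame

    print(parse_command("bot show plum"))
    # {'action': 'show', 'object': 'plum', 'object_prop': None, 'target': None, 'target_prop': None}
    print(parse_command("bot put apple to yellow cup"))
    # {'action': 'put', 'object': 'apple', 'object_prop': None, 'target': 'cup', 'target_prop': 'yellow'}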
We have implemented a neurobiologically plausible system on a robot that integrates visual attention, object recognition, language and action processing using a coherent cortex-like architecture based on neural associative memories. This system enables the robot to respond to spoken commands like "bot show plum" or "bot put apple to yellow cup". The …
We have implemented a system that can understand spoken command sentences like "Bot lift green apple" using hidden Markov models (HMMs) and neural associative memories. After a command sentence is spoken into a microphone, the system processes it in three stages: in the first step, the auditory input is transformed into a convenient subsymbolic representation …
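A hedged sketch of the word-recognition stage this abstract attributes to HMMs: each vocabulary word gets its own small discrete HMM, and the word whose model yields the highest Viterbi score for the observed feature sequence is selected. The two-state models, the three "acoustic symbols", and all probabilities below are toy assumptions, not values from the system.

    import numpy as np

    def viterbi_log_score(obs, log_pi, log_A, log_B):
        """Best-path log likelihood of a discrete observation sequence under an HMM."""
        delta = log_pi + log_B[:, obs[0]]
        for o in obs[1:]:
            delta = np.max(delta[:, None] + log_A, axis=0) + log_B[:, o]
        return float(np.max(delta))

    def log(p):
        return np.log(np.asarray(p, dtype=float))

    # Toy two-state word models over three discrete acoustic symbols (assumed).
    word_hmms = {
        "lift": (log([0.9, 0.1]),
                 log([[0.7, 0.3], [0.2, 0.8]]),
                 log([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])),
        "show": (log([0.9, 0.1]),
                 log([[0.7, 0.3], [0.2, 0.8]]),
                 log([[0.1, 0.1, 0.8], [0.8, 0.1, 0.1]])),
    }

    def recognize(obs):
        # The word model with the highest Viterbi score wins.
        return max(word_hmms, key=lambda w: viterbi_log_score(obs, *word_hmms[w]))

    print(recognize([0, 0, 1, 1]))  # -> "lift" for this toy observation sequence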
We have implemented a neurobiologically plausible system on a robot that integrates object recognition, visual attention, language and action processing using a coherent cortex-like architecture based on neural associative memories. This system enables the robot to respond to spoken commands like "bot show plum" or "bot put apple to yellow cup". The …
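Stripped of its neural substrate, the interplay of the modules named here can be pictured as a simple perception-action loop: a parsed command drives visual attention over the scene, and the attended object parameterizes the motor action. The sketch below is only such a schematic loop under assumed interfaces and an assumed scene; it does not reflect the associative-memory implementation.

    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        colour: str
        position: tuple

    def attend(scene, name, colour=None):
        """Visual attention as a filter: return the first scene object matching the request."""
        for obj in scene:
            if obj.name == name and (colour is None or obj.colour == colour):
                return obj
        return None

    def execute(command, scene):
        """Couple the language, attention and action modules for one command."""
        words = command.lower().split()
        action = words[1]                  # naive assumption: "bot <action> ... <object>"
        target = attend(scene, words[-1])  # naive assumption: last word names the object
        if target is None:
            return f"object '{words[-1]}' not found"
        return f"{action} {target.name} at {target.position}"

    scene = [SceneObject("plum", "blue", (0.2, 0.4)), SceneObject("apple", "green", (0.6, 0.1))]
    print(execute("bot show plum", scene))  # -> "show plum at (0.2, 0.4)"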
In this paper, the problem of safe exploration in the active learning context is considered. Safe exploration is especially important for data sampling from technical and industrial systems, e.g. combustion engines and gas turbines, where critical and unsafe measurements need to be avoided. The objective is to learn data-based regression models from such …
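One common way to make this concrete, sketched below under assumed toy functions, kernel, and thresholds rather than the paper's setup, is to model an observable safety indicator with a Gaussian process and to query, among the candidates that are predicted safe with high probability, the point with the largest predictive uncertainty.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def safety_indicator(x):
        # Toy safety signal: values above the limit of 1.0 are treated as unsafe.
        return 0.1 * x**2

    # A few initial measurements taken in a region known to be safe.
    X_train = np.array([[0.0], [0.5], [1.0], [1.5]])
    z_train = safety_indicator(X_train.ravel()) + 0.01 * rng.standard_normal(4)

    # Fixed RBF kernel (optimizer=None keeps the assumed hyperparameters).
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4, optimizer=None)
    gp.fit(X_train, z_train)

    # Candidate inputs across the whole design space, including the unsafe region.
    X_cand = np.linspace(0.0, 5.0, 101).reshape(-1, 1)
    mean, std = gp.predict(X_cand, return_std=True)

    # Probability that the safety indicator stays below the limit at each candidate.
    limit, confidence = 1.0, 0.95
    p_safe = norm.cdf((limit - mean) / np.maximum(std, 1e-9))

    # Explore: among sufficiently safe candidates, pick the most uncertain one.
    safe_mask = p_safe >= confidence
    next_x = X_cand[safe_mask][np.argmax(std[safe_mask])]
    print("next safe query point:", float(next_x[0]))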