Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational …
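As context for the gate-level comparison above, here is a minimal NumPy sketch of one step of the standard ("vanilla") LSTM cell; it is not code from the paper, and the stacked parameter layout (W, U, b holding the four gates' weights) is an illustrative choice.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    W, U, b stack the parameters of the input, forget, cell and output
    gates; the slices below split them back out.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b       # all four pre-activations at once
    i = sigmoid(z[0*n:1*n])          # input gate
    f = sigmoid(z[1*n:2*n])          # forget gate
    g = np.tanh(z[2*n:3*n])          # candidate cell state
    o = sigmoid(z[3*n:4*n])          # output gate
    c = f * c_prev + i * g           # new cell state
    h = o * np.tanh(c)               # new hidden state
    return h, c
```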
We propose a new indirect encoding scheme for neural networks in which the weight matrices are represented in the frequency domain by sets of Fourier coefficients. This scheme exploits spatial regularities in the matrix to reduce the dimensionality of the representation by ignoring high-frequency coefficients, as is done in lossy image compression. We compare …
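To make the encoding idea concrete, the sketch below represents a weight matrix by a small block of low-frequency coefficients and reconstructs the full matrix from them. It uses the 2-D DCT (as in JPEG-style lossy compression) as a stand-in for the paper's transform; the block shape and truncation rule here are assumptions, not the authors' exact scheme.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_weights(W, keep):
    """Keep only the keep x keep low-frequency DCT coefficients of W."""
    C = dctn(W, norm="ortho")
    return C[:keep, :keep]           # compact genome: keep*keep numbers

def decode_weights(coeffs, shape):
    """Rebuild a full weight matrix from the retained coefficients."""
    C = np.zeros(shape)
    k0, k1 = coeffs.shape
    C[:k0, :k1] = coeffs             # high frequencies stay zero
    return idctn(C, norm="ortho")

# example: an 8x8 coefficient block standing in for a 64x64 weight matrix
W = np.random.randn(64, 64)
genome = compress_weights(W, keep=8)
W_hat = decode_weights(genome, W.shape)
```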
The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, as do their genomes. But scaling NE to large nets (i.e. tens of thousands of weights) is …
In this paper we describe the simulation of autonomous robots controlled by recurrent neural networks, which are evolved through indirect encoding using the HyperNEAT algorithm. The robots utilize a 180-degree-wide sensor array. Thanks to the scalability of the neural networks generated by HyperNEAT, the sensor array can have various resolutions. This would allow …
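The resolution independence comes from querying an evolved weight-generating network (a CPPN) at substrate coordinates instead of storing one weight per connection. A minimal sketch of that querying step follows; the cppn function is a fixed analytic stand-in for the evolved network, and the coordinate layout is an assumption.

```python
import numpy as np

def cppn(src, dst):
    """Stand-in for an evolved CPPN: maps a pair of substrate
    coordinates to a connection weight."""
    (x1, y1), (x2, y2) = src, dst
    return np.sin(3.0 * x1 * x2) * np.exp(-abs(y1 - y2))

def build_sensor_weights(n_sensors, n_hidden):
    """Query the weight function once per (sensor, hidden) pair.
    Because weights come from coordinates, the same CPPN works
    for any sensor resolution."""
    sensors = [(-1.0 + 2.0 * i / (n_sensors - 1), -1.0) for i in range(n_sensors)]
    hidden  = [(-1.0 + 2.0 * j / (n_hidden - 1),  0.0) for j in range(n_hidden)]
    return np.array([[cppn(s, h) for s in sensors] for h in hidden])

W_low  = build_sensor_weights(n_sensors=9,  n_hidden=4)   # coarse 180° array
W_high = build_sensor_weights(n_sensors=45, n_hidden=4)   # finer array, same CPPN
```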
Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to reduce the problem dimensionality either by (1) compressing the representation of the neural network controllers or by (2) employing a pre-processor (compressor) that transforms the …
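A minimal sketch of option (2) follows: a pre-processor compresses each high-dimensional frame into a few features, so the evolved controller only needs weights for those features rather than for every pixel. The random projection stands in for a learned compressor (e.g. an autoencoder), and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# pre-processor: projects a high-dimensional frame down to a few features
# (a fixed random projection stands in for a learned compressor here)
P = rng.standard_normal((8, 64 * 64)) / np.sqrt(64 * 64)

def compress(frame):
    return P @ frame.ravel()

# small controller evolved by NE: only 8 * n_actions weights to evolve,
# instead of one weight per input pixel
def controller(features, genome, n_actions=3):
    W = genome.reshape(n_actions, features.size)
    return np.tanh(W @ features)

frame = rng.standard_normal((64, 64))     # fake visual input
genome = rng.standard_normal(3 * 8)       # the part NE has to evolve
action = controller(compress(frame), genome)
```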
Many sequential processing tasks require complex nonlinear transition functions from one step to the next. However, recurrent neural networks with "deep" transition functions remain difficult to train, even when using Long Short-Term Memory (LSTM) networks. We introduce a novel theoretical analysis of recurrent networks based on Geršgorin's circle theorem …
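For reference, Geršgorin's circle theorem states that every eigenvalue of a square matrix lies in the union of discs centred at its diagonal entries, with radii equal to the row sums of the off-diagonal magnitudes. The small sketch below computes those discs for an arbitrary recurrent weight matrix; it illustrates the theorem only and is not the paper's analysis.

```python
import numpy as np

def gershgorin_discs(A):
    """Return (center, radius) pairs: every eigenvalue of A lies in the
    union of the discs |z - A[i, i]| <= sum_{j != i} |A[i, j]|."""
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)
    return list(zip(centers, radii))

R = np.random.randn(4, 4) * 0.1 + np.eye(4)   # an example recurrent weight matrix
for c, r in gershgorin_discs(R):
    print(f"disc centred at {c:+.3f} with radius {r:.3f}")
```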
In this paper we present the application of genetic programming (GP) [1] to the evolution of an indirect encoding of neural network weights. We compare the original HyperNEAT algorithm with our implementation, in which we replaced the underlying NEAT with genetic programming. The resulting algorithm was named HyperGP. The evolved neural networks were used as controllers …
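To illustrate how a GP individual can serve as the indirect encoding, the sketch below evaluates a tiny expression tree as a weight function over substrate coordinates. The tree, function set and coordinate names are made up for illustration and are not taken from the HyperGP implementation.

```python
import math

# A tiny GP expression tree encoding a weight function over substrate
# coordinates (x1, y1, x2, y2); evolution would mutate and cross over such trees.
tree = ("mul", ("sin", ("mul", "x1", "x2")), ("sub", "y1", "y2"))

def evaluate(node, env):
    if isinstance(node, str):                 # terminal: a coordinate variable
        return env[node]
    op, *args = node
    vals = [evaluate(a, env) for a in args]
    return {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
        "sin": lambda a: math.sin(a),
    }[op](*vals)

def weight(src, dst):
    """Connection weight between a source and a target neuron position."""
    env = {"x1": src[0], "y1": src[1], "x2": dst[0], "y2": dst[1]}
    return evaluate(tree, env)

print(weight((0.5, -1.0), (-0.25, 0.0)))
```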