Corpus ID: 14508113

Incremental Slow Feature Analysis: Adaptive and Episodic Learning from High-Dimensional Input Streams

@article{Kompella2011IncrementalSF,
  title={Incremental Slow Feature Analysis: Adaptive and Episodic Learning from High-Dimensional Input Streams},
  author={Varun Raj Kompella and Matthew D. Luciw and J{\"u}rgen Schmidhuber},
  journal={ArXiv},
  year={2011},
  volume={abs/1112.2113}
}
Slow Feature Analysis (SFA) extracts features representing the underlying causes of changes within a temporally coherent high-dimensional raw sensory input signal. Our novel incremental version of SFA (IncSFA) combines incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, IncSFA adapts along with non-stationary environments, is amenable to episodic training, is not corrupted by outliers, and is covariance-free. These properties make IncSFA a… 
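
The abstract's description of IncSFA as a combination of incremental PCA and Minor Components Analysis can be made concrete with a small sketch. The following is illustrative only, not the authors' exact algorithm: it assumes the input has already been whitened (the full method maintains the whitening online via CCIPCA) and extracts one slow direction as the minor component of the derivative signal; the function name `incsfa_step` and the toy data are assumptions for the demo.

```python
import numpy as np

# Minimal sketch of the IncSFA idea (illustrative, not the authors' exact
# algorithm). After whitening, the slowest feature direction is the *minor*
# component of the derivative signal, so a stochastic Minor Components
# Analysis update on successive differences suffices.

def incsfa_step(w, z, z_prev, lr=0.5):
    """One update of a candidate slow-feature direction w.

    z, z_prev: current and previous *whitened* inputs. In the full method
    the whitening itself is maintained online (e.g. via CCIPCA); here it
    is assumed given.
    """
    zdot = z - z_prev                  # discrete-time derivative
    w = w - lr * zdot * (zdot @ w)     # anti-Hebbian / MCA step
    return w / np.linalg.norm(w)       # re-project onto the unit sphere

# Toy usage: a slow and a fast sinusoid on separate axes.
t = np.linspace(0, 20 * np.pi, 5000)
X = np.stack([np.sin(0.05 * t), np.sin(5.0 * t)], axis=1)
Z = (X - X.mean(0)) / X.std(0)         # crude batch whitening for the demo
w = np.array([1.0, 1.0]) / np.sqrt(2)
for i in range(1, len(Z)):
    w = incsfa_step(w, Z[i], Z[i - 1])
print(w)                               # converges near [±1, 0]: the slow axis
```

The anti-Hebbian sign is the essential difference from an online PCA rule: descending, rather than ascending, the variance of the derivative drives w toward the slowest direction.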

Hierarchical Incremental Slow Feature Analysis

This work focuses on a hierarchical extension of Slow Feature Analysis. SFA has intriguing potential for autonomous agents that learn from raw visual streams, but to realize this potential it needs to be both hierarchical and adaptive.

Analytic Manifold Learning: Unifying and Evaluating Representations for Continuous Control

An evaluation suite that measures alignment between latent and true low-dimensional states is proposed, and a unifying mathematical formulation for learning latent relations is presented, which enables a more general, flexible and principled way of shaping the latent space.

State Representation Learning for Control: An Overview

This survey covers the state of the art in state representation learning in recent years, reviewing SRL methods that involve interaction with the environment, their implementations, and their applications to robotics control tasks (simulated or real).

Locally Constrained Representations in Reinforcement Learning

Locally constrained representations are proposed, where an auxiliary loss forces each state representation to be predictable from the representations of neighbouring states, keeping the representations from changing too rapidly.
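
As a hedged illustration of the auxiliary loss just described (the predictor here is a hypothetical linear map, not the paper's actual architecture), the local constraint can be written as a mean squared prediction error between neighbouring representations:

```python
import numpy as np

# Illustrative auxiliary loss: representations of temporally adjacent
# states should predict one another. W stands in for a learned neighbour
# predictor (a hypothetical linear map, chosen for brevity).

def local_consistency_loss(phi, W):
    """phi: (T, d) state representations along a trajectory.
    W:   (d, d) matrix predicting phi[t+1] from phi[t]."""
    pred = phi[:-1] @ W.T                 # predict each next representation
    return np.mean(np.sum((pred - phi[1:]) ** 2, axis=1))
```

Added to the main RL objective, a term of this form penalizes representations that change too rapidly between neighbouring states.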

Modular Latent Space Transfer with Analytic Manifold Learning

An approach is proposed that uses simulation to help learn latent state representations without requiring a match in visual appearance or domain randomization, and it is shown that this approach improves the quality of the latent space of unsupervised learners trained on non-stationary high-dimensional observations.

Acceleration of Actor-Critic Deep Reinforcement Learning for Visual Grasping by State Representation Learning Based on a Preprocessed Input Image

It is found that the proposed preprocessed input image is the key to capturing effectively a compact representation that enables deep RL to learn robotic grasping skills from highly varied and diverse visual inputs.

Slow feature action prototypes effect assessment in mechanism for recognition of biological movement ventral stream

The proposed approach is tested on videos from the KTH human action database and shows good performance compared to existing methods, with good interaction between the dorsal and ventral processing streams.

Non-Prehensile Manipulation Learning through Self-Supervision

A novel learning model based on neural networks is proposed to sample robot actions that push objects to desired positions, demonstrating the efficiency of the method on non-prehensile manipulation tasks such as pushing or rotating small objects on a table.

Improving sensory representations using episodic memory

The medial temporal lobe (MTL) can influence perceptual discrimination indirectly even if it has a purely mnemonic function, and performance in visual discrimination tasks is demonstrated to be superior when episodic memory is present.

References

Incremental Slow Feature Analysis

The first online version of Slow Feature Analysis is developed via a combination of incremental Principal Components Analysis and Minor Components Analysis, and it is shown to learn, without a teacher, to encode the input stream by informative slow features representing meaningful abstract environmental properties.

Slow Feature Analysis: Unsupervised Learning of Invariances

Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal that is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance.
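
The closed-form (linear) version of this optimization is compact enough to sketch: whiten the signal, then pick the directions in which the temporal derivative has least variance. This is a minimal batch sketch under a linear function family, not the expanded nonlinear SFA typically used in practice:

```python
import numpy as np

# Batch linear SFA sketch: after whitening, the slowest features are the
# minor eigenvectors of the derivative covariance, automatically
# decorrelated and ordered by slowness.

def linear_sfa(X, n_features):
    """X: (T, d) time series. Returns (T, n_features) features,
    ordered from slowest to fastest."""
    X = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(X.T))   # input covariance
    W = eigvec / np.sqrt(eigval)                   # whitening matrix
    # (in real use, near-zero eigenvalues should be clipped or dropped)
    Z = X @ W
    Zdot = np.diff(Z, axis=0)                      # temporal derivative
    dval, dvec = np.linalg.eigh(np.cov(Zdot.T))    # ascending eigenvalues
    return Z @ dvec[:, :n_features]                # smallest = slowest
```

Because the whitened signal has unit covariance, the extracted features come out decorrelated, and the ascending eigenvalue order of `eigh` directly yields the ordering by degree of invariance described above.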

Reinforcement Learning on Slow Features of High-Dimensional Input Streams

This work supports the hypothesis that slowness learning is an important unsupervised learning principle used by the brain to form efficient state representations for behavioral learning.

Sequential Constant Size Compressors for Reinforcement Learning

This work investigates a novel method that applies standard RL techniques to the hidden-layer output of a Sequential Constant-Size Compressor, which takes the form of a sequential Recurrent Auto-Associative Memory trained through standard back-propagation.

Learning Unambiguous Reduced Sequence Descriptions

Experiments show that systems based on these principles can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets.

Learning Factorial Codes by Predictability Minimization

An entirely local algorithm is described that has the potential to learn unique representations of extended input sequences, relevant for segmentation tasks, speeding up supervised learning, and novelty detection.
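
The objective behind predictability minimization can be sketched briefly. The following is a hedged illustration of the idea, with linear predictors as a simplifying assumption rather than the paper's setup:

```python
import numpy as np

# Sketch of the predictability-minimization objective: each code unit is
# paired with a predictor that tries to guess it from the *other* units.
# Predictors descend this error while the encoder ascends it; the min-max
# game drives the code units toward statistical independence (a factorial
# code). Linear predictors are a simplifying assumption made here.

def pm_prediction_error(Y, P):
    """Y: (N, d) code-unit outputs. P: (d, d) predictor weights with a
    zero diagonal, so unit i is predicted only from the other units."""
    assert np.allclose(np.diag(P), 0.0), "no self-prediction allowed"
    return np.mean((Y @ P.T - Y) ** 2)
```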

Candid Covariance-Free Incremental Principal Component Analysis

A fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), is used to compute the principal components of a sequence of samples incrementally without estimating the covariance matrix (hence covariance-free).
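
The core CCIPCA recursion is simple enough to sketch. The version below follows the paper's basic update but omits the amnesic weighting; the initialization and exact deflation details are simplified assumptions:

```python
import numpy as np

# Sketch of the CCIPCA update: each component estimate v_i absorbs a new
# sample directly, so no covariance matrix is ever formed; the sample is
# then deflated before updating the next component. The paper's amnesic
# weighting is omitted for brevity.

def ccipca_step(V, u, n):
    """V: list of unnormalized component estimates (initialize each with
    an early sample). u: new mean-centered sample. n: sample index >= 1."""
    u = u.copy()
    for i, v in enumerate(V):
        vn = v / np.linalg.norm(v)
        V[i] = ((n - 1) / n) * v + (1 / n) * (u @ vn) * u
        vi = V[i] / np.linalg.norm(V[i])
        u = u - (u @ vi) * vi          # deflate: remove explained component
    return V
```

Typical usage initializes each v_i from an early observation and calls `ccipca_step` once per arriving sample; the norms of the estimates converge to the eigenvalues and their directions to the eigenvectors.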

Learning Complex, Extended Sequences Using the Principle of History Compression

A simple principle for reducing the descriptions of event sequences without loss of information is introduced and this insight leads to the construction of neural architectures that learn to divide and conquer by recursively decomposing sequences.
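
The principle lends itself to a tiny sketch: only events the lower-level predictor fails to predict are passed upward, shortening the sequence without losing information given the predictor. This is a hedged toy illustration of the counting principle, not Schmidhuber's neural implementation:

```python
# History compression in miniature: keep only the unpredicted events
# (with their positions); together with the predictor, these suffice to
# reconstruct the full sequence.

def compress(seq, predict):
    """seq: list of symbols. predict: callable mapping the history so far
    to a predicted next symbol (a stand-in for a trained recurrent net)."""
    out, history = [], []
    for s in seq:
        if not history or predict(history) != s:
            out.append((len(history), s))   # keep position + surprise
        history.append(s)
    return out

# Toy usage with a trivial "repeat the last symbol" predictor.
seq = list("aaaabaaaac")
print(compress(seq, lambda h: h[-1]))  # [(0,'a'), (4,'b'), (5,'a'), (9,'c')]
```

Stacking such compressors, with each level predicting the surprises of the level below, yields the recursive divide-and-conquer decomposition the abstract describes.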

Slow, Decorrelated Features for Pretraining Complex Cell-like Networks

A new type of neural network activation function based on recent physiological rate models for complex cells in visual area V1 is introduced, which results in orientation-selective features, similar to the receptive fields of complex cells.

Temporal Continuity Learning for Convolutional Deep Belief Networks

The goal of this work is to develop a computer algorithm that can replicate this sort of learning, called temporal continuity learning, using Deep Belief Networks and entirely different heuristics to measure how ’good’ a representation is.
...