Corpus ID: 14508113

Incremental Slow Feature Analysis: Adaptive and Episodic Learning from High-Dimensional Input Streams

@article{Kompella2011IncrementalSF,
  title={Incremental Slow Feature Analysis: Adaptive and Episodic Learning from High-Dimensional Input Streams},
  author={Varun Raj Kompella and Matthew D. Luciw and J{\"u}rgen Schmidhuber},
  journal={ArXiv},
  year={2011},
  volume={abs/1112.2113}
}
Slow Feature Analysis (SFA) extracts features representing the underlying causes of changes within a temporally coherent high-dimensional raw sensory input signal. Our novel incremental version of SFA (IncSFA) combines incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, IncSFA adapts along with non-stationary environments, is amenable to episodic training, is not corrupted by outliers, and is covariance-free. These properties make IncSFA a… 
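As a rough illustration of the slowness objective described above (a minimal sketch, not the authors' IncSFA implementation), a batch linear SFA can be written as: whiten the input, take finite-difference temporal derivatives, and keep the minor components of the derivative covariance. The helper name `linear_sfa` is hypothetical.

```python
import numpy as np

def linear_sfa(X, n_features=2):
    """Minimal batch linear SFA sketch (illustrative, not the paper's code).

    Finds projections minimizing the temporal variation of the output
    subject to the whitened signal having unit variance.
    """
    X = X - X.mean(axis=0)
    # Whiten via PCA on the input covariance.
    cov = X.T @ X / len(X)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10  # drop rank-deficient directions
    W_white = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = X @ W_white
    # Temporal derivative by finite differences.
    Zdot = np.diff(Z, axis=0)
    cov_dot = Zdot.T @ Zdot / len(Zdot)
    # Slow features = minor components of the derivative covariance;
    # eigh sorts eigenvalues ascending, so the slowest come first.
    _, dvec = np.linalg.eigh(cov_dot)
    return W_white @ dvec[:, :n_features]
```

Applied to a linear mixture of a slow and a fast sinusoid, the first returned projection recovers the slow source up to sign. IncSFA replaces the two batch eigendecompositions with incremental PCA (for whitening) and Minor Components Analysis (for the slow directions), which is what makes it covariance-free.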

Hierarchical Incremental Slow Feature Analysis

TLDR
This work focuses on the hierarchical extension of Slow Feature Analysis, which has intriguing potential for autonomous agents that learn from raw visual streams; to realize this potential, it needs to be both hierarchical and adaptive.

Analytic Manifold Learning: Unifying and Evaluating Representations for Continuous Control

TLDR
An evaluation suite that measures alignment between latent and true low-dimensional states is proposed, and a unifying mathematical formulation for learning latent relations is presented, which enables a more general, flexible and principled way of shaping the latent space.

State Representation Learning for Control: An Overview (Jun 2018)

TLDR
This survey aims at covering the state-of-the-art on state representation learning in the most recent years by reviewing different SRL methods that involve interaction with the environment, their implementations and their applications in robotics control tasks (simulated or real).

Locally Constrained Representations in Reinforcement Learning

TLDR
Locally constrained representations are proposed, where an auxiliary loss forces the state representations to be predictable by the representations of the neighbouring states, which constrains the representations from changing too rapidly.

Modular Latent Space Transfer with Analytic Manifold Learning

TLDR
An approach that uses simulation to help learning latent state representations without requiring a match in visual appearance or domain randomization is proposed and it is shown that this approach improves the quality of the latent space of unsupervised learners that train from non-stationary high-dimensional observations.

Acceleration of Actor-Critic Deep Reinforcement Learning for Visual Grasping in Clutter by State Representation Learning Based on Disentanglement of a Raw Input Image

TLDR
It is found that preprocessing based on the disentanglement of a raw input image is the key to effectively capturing a compact representation, which enables deep RL to learn robotic grasping skills from highly varied and diverse visual inputs.

Slow feature action prototypes effect assessment in mechanism for recognition of biological movement ventral stream

TLDR
The proposed approach is tested on videos from the KTH human action database and performs well compared to existing methods, showing good interaction between the dorsal and ventral processing streams.

Non-Prehensile Manipulation Learning through Self-Supervision

TLDR
A novel neural-network-based learning model is proposed to sample robot actions that push objects to desired positions, demonstrating the method's efficiency on non-prehensile manipulation tasks such as pushing or rotating small objects on a table.

Incremental Sparse-PCA Feature Extraction For Data Streams


References

Showing 1-10 of 58 references

Incremental Slow Feature Analysis

TLDR
The first online version of Slow Feature Analysis is developed via a combination of incremental Principal Components Analysis and Minor Components Analysis, and it is shown to learn, without a teacher, to encode the input stream with informative slow features representing meaningful abstract environmental properties.

Slow Feature Analysis: Unsupervised Learning of Invariances

TLDR
Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal; it is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance.

Reinforcement Learning on Slow Features of High-Dimensional Input Streams

TLDR
The hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning is supported.

Sequential Constant Size Compressors for Reinforcement Learning

TLDR
This work investigates a novel method that applies standard RL techniques to the hidden-layer output of a Sequential Constant-Size Compressor, which takes the form of a sequential Recurrent Auto-Associative Memory trained through standard back-propagation.

Learning Unambiguous Reduced Sequence Descriptions

TLDR
Experiments show that systems based on these principles can require less computation per time step and many fewer training sequences than conventional training algorithms for recurrent nets.

Learning Factorial Codes by Predictability Minimization

TLDR
An entirely local algorithm is described that has a potential for learning unique representations of extended input sequences that are potentially relevant for segmentation tasks, speeding up supervised learning, and novelty detection.

Candid Covariance-Free Incremental Principal Component Analysis

TLDR
A fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), is presented to compute the principal components of a sequence of samples incrementally without estimating the covariance matrix (hence covariance-free).
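CCIPCA is one of the two building blocks of IncSFA. A minimal sketch of its per-sample update (non-amnesic variant; the helper name `ccipca_update` and the calling convention are assumptions, not the paper's API):

```python
import numpy as np

def ccipca_update(v, u, n):
    """One CCIPCA step: update k eigenvector estimates v (k x d)
    with the n-th sample u (d,). No covariance matrix is ever formed;
    each estimate converges to eigenvalue * eigenvector of the data
    covariance, and later components are found on deflated residuals.
    """
    v = v.copy()
    u = u.astype(float).copy()
    k, _ = v.shape
    for i in range(k):
        if n <= i:
            break  # not enough samples yet to start this component
        if n == i + 1:
            v[i] = u  # initialize with the current residual
        else:
            # Moving average of u u^T v / ||v|| (candid update rule).
            v[i] = (n - 1) / n * v[i] + (1 / n) * (u @ v[i]) / np.linalg.norm(v[i]) * u
        # Deflate: remove the component along v[i] before the next estimate.
        vn = v[i] / np.linalg.norm(v[i])
        u = u - (u @ vn) * vn
    return v
```

Feeding zero-mean samples one at a time, the first estimate aligns with the dominant principal direction; IncSFA uses this incremental whitening in place of a batch eigendecomposition.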

Slow, Decorrelated Features for Pretraining Complex Cell-like Networks

TLDR
A new type of neural network activation function based on recent physiological rate models for complex cells in visual area V1 is introduced, which results in orientation-selective features, similar to the receptive fields of complex cells.

Temporal Continuity Learning for Convolutional Deep Belief Networks

TLDR
The goal of this work is to develop an algorithm that replicates this kind of learning, called temporal continuity learning, using Deep Belief Networks and entirely different heuristics to measure how 'good' a representation is.

Unsupervised feature learning for audio classification using convolutional deep belief networks

In recent years, deep learning approaches have gained significant interest as a way of building hierarchical representations from unlabeled data. However, to our knowledge, these deep learning
...