Predictive Coding and the Slowness Principle: An Information-Theoretic Approach

@article{Creutzig2008PredictiveCA,
  title={Predictive Coding and the Slowness Principle: An Information-Theoretic Approach},
  author={Felix Creutzig and Henning Sprekeler},
  journal={Neural Computation},
  year={2008},
  volume={20},
  pages={1026--1041}
}
Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals… 
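
To make the slowness principle concrete, here is a minimal sketch of linear SFA, assuming the input is an array x of shape (T, n) with T time steps and n channels; linear_sfa is an illustrative helper written for this summary, not code from the paper.

import numpy as np
from scipy.linalg import eigh

def linear_sfa(x, n_features=1):
    # Center the signal; x has shape (T, n).
    x = x - x.mean(axis=0)
    # Approximate the temporal derivative by finite differences.
    dx = np.diff(x, axis=0)
    # Covariances of the signal and of its derivative.
    C = np.cov(x, rowvar=False)
    Cdot = np.cov(dx, rowvar=False)
    # Slow directions solve the generalized eigenproblem Cdot w = lam * C w.
    # eigh returns eigenvalues in ascending order, so the first columns of W
    # are the slowest features; eigh's normalization w^T C w = 1 enforces
    # SFA's unit-variance constraint.
    _, W = eigh(Cdot, C)
    return x @ W[:, :n_features]

# Toy demo: recover a slow sine hidden in two quickly mixed channels.
t = np.linspace(0, 2 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(80 * t)
x = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])
y = linear_sfa(x)  # proportional (up to sign) to the slow component sin(t)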

Towards a unified theory of efficient, predictive and sparse coding

TLDR
It is shown that predictive coding can lead neurons to either correlate or decorrelate their inputs, depending on the presented stimuli, whereas (at low noise) efficient coding always predicts decorrelation.

Toward a unified theory of efficient, predictive, and sparse coding

TLDR
A unified framework is developed that encompasses previously proposed efficient coding models and extends to unique regimes; it promises a way to explain the observed diversity of sensory neural responses as due to multiple functional goals and constraints fulfilled by different cell types and/or circuits.
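
For reference, the decorrelation that low-noise efficient coding predicts corresponds to whitening the input. The sketch below (an illustration for this summary, not either paper's model) applies ZCA whitening to correlated toy stimuli and checks that the output covariance is approximately the identity.

import numpy as np

def zca_whiten(x, eps=1e-8):
    # Center, then rotate-scale-rotate back so that cov(output) ~ I.
    x = x - x.mean(axis=0)
    C = np.cov(x, rowvar=False)
    vals, vecs = np.linalg.eigh(C)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return x @ W

rng = np.random.default_rng(0)
mix = np.array([[1.0, 0.8, 0.2],
                [0.0, 1.0, 0.5],
                [0.0, 0.0, 1.0]])
x = rng.normal(size=(5000, 3)) @ mix  # correlated "stimuli"
y = zca_whiten(x)
print(np.round(np.cov(y, rowvar=False), 2))  # approximately the identity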

Slowness Learning: Mathematical Approaches and Synaptic Mechanisms

TLDR
It is shown that spike-timing-dependent plasticity can, under certain conditions, be interpreted as an implementation of slowness learning and can lead to receptive field dynamics that can be described in terms of reaction-diffusion equations.

Efficient Temporal Coding in the Early Visual System: Existing Evidence and Future Directions

TLDR
A clear summary of the theoretical relationship between efficient coding and temporal prediction is provided; evidence that efficient coding principles explain computations in the retina is reviewed, and the same framework is then applied to computations occurring in early visuocortical areas.

Coherent Infomax as a Computational Goal for Neural Systems

TLDR
This work shows that Coherent Infomax is consistent with a particular Bayesian interpretation of the contextual guidance of learning and processing; it explicitly specifies rules for on-line learning and suggests approximations by which the learning rules can be made computationally feasible within systems composed of very many local processors.

A Theoretical Basis for Emergent Pattern Discrimination in Neural Systems Through Slow Feature Extraction

TLDR
It is shown that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics.
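
The correspondence can be illustrated in a few lines, under the key assumption of this setting that pattern identity changes slowly, i.e., samples arrive in long same-class blocks; the data and constants below are made up for this sketch.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
# Two Gaussian classes, presented in long blocks so class identity is slow.
a = rng.normal([2.0, 0.0], 1.0, size=(500, 2))
b = rng.normal([-2.0, 0.0], 1.0, size=(500, 2))
x = np.concatenate([a, b])

# Fisher linear discriminant direction: w ~ Sw^{-1} (mu_a - mu_b).
Sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)
w_fld = np.linalg.solve(Sw, a.mean(axis=0) - b.mean(axis=0))

# Slowest linear SFA direction: generalized eigenproblem Cdot w = lam * C w.
xc = x - x.mean(axis=0)
C = np.cov(xc, rowvar=False)
Cdot = np.cov(np.diff(xc, axis=0), rowvar=False)
w_sfa = eigh(Cdot, C)[1][:, 0]

cos = abs(w_fld @ w_sfa) / (np.linalg.norm(w_fld) * np.linalg.norm(w_sfa))
print(f"|cos(FLD, slowest SFA direction)| = {cos:.3f}")  # close to 1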

Involving Motor Capabilities in the Formation of Sensory Space Representations

TLDR
An algorithm is proposed that reorganizes an agent's representation of sensory space by maximizing the predictability of sensory state transitions given a motor action, and it is found that the optimization algorithm generates compact, isotropic representations of space, comparable to hippocampal place fields.

Compressed Predictive Information Coding

TLDR
A novel information-theoretic framework, Compressed Predictive Information Coding (CPIC), is developed to extract useful representations from dynamic data; introducing stochasticity in the encoder is found to contribute robustly to better representations.

Spatio-Temporally Efficient Coding Assigns Functions to Hierarchical Structures of the Visual System

TLDR
It is demonstrated that spatio-temporally efficient coding predicts well-known features of neural responses in the visual system, such as the deviation of neural responses to unfamiliar inputs and a bias in preferred orientations.

Local minimization of prediction errors drives learning of invariant object representations in a generative network model of visual perception

TLDR
It is shown how a multilayered predictive coding network can learn to recognize objects from the bottom up and to generate specific representations via a top-down pathway through a single learning rule: the local minimization of prediction errors.

References

First 10 of 40 references shown.

Neural coding and decoding: communication channels and quantization

TLDR
It is shown that a coding scheme is an almost bijective relation between equivalence classes of stimulus/response pairs, which allows a quantitative determination of the type of information encoded in neural activity patterns and identification of the code with which that information is represented.

Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects.

TLDR
Results suggest that rather than being exclusively feedforward phenomena, nonclassical surround effects in the visual cortex may also result from cortico-cortical feedback as a consequence of the visual system using an efficient hierarchical strategy for encoding natural images.

Predictive coding: a fresh view of inhibition in the retina

TLDR
Comparisons suggest that, in the early stages of processing, the visual system is concerned primarily with coding the visual image to protect against subsequent intrinsic noise, rather than with reconstructing the scene or extracting specific features from it.
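
A toy version of this idea (an illustration in the spirit of the paper, not its model): each channel transmits only the difference between its input and a prediction formed from neighboring inputs, so smooth regions cost little and mainly unpredicted structure such as edges is signaled.

import numpy as np

def predictive_code(signal):
    # Predict each sample as the mean of its two neighbors (a crude stand-in
    # for lateral inhibition) and transmit only the prediction error.
    pred = np.empty_like(signal)
    pred[1:-1] = 0.5 * (signal[:-2] + signal[2:])
    pred[0], pred[-1] = signal[0], signal[-1]  # no prediction at the borders
    return signal - pred

x = np.concatenate([np.full(50, 1.0), np.full(50, 3.0)])  # a step "image"
e = predictive_code(x)
print(e.var(), x.var())  # error variance << signal variance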

Redundancy Reduction and Independent Component Analysis: Conditions on Cumulants and Adaptive Approaches

TLDR
This article gives simple conditions on the network output that guarantee that source separation has been obtained and shows how the resulting updating rules are related to the BCM theory of synaptic plasticity.

Unifying perception and curiosity

TLDR
This dissertation proposes a novel principle that, it is hoped, will not only allow for a greater understanding of the brain but also serve as a principled basis for the design of future algorithms to solve a broad range of problems in artificial intelligence.

Slow feature analysis yields a rich repertoire of complex cell properties.

In this study, we investigate temporal slowness as a learning principle for receptive fields, using slow feature analysis, a new algorithm to determine functions that extract slowly varying signals…

Slow Feature Analysis: Unsupervised Learning of Invariances

TLDR
Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal; it is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance.
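
For reference, the optimization problem SFA solves can be stated as follows (this is the standard formulation; the notation is added here for clarity): given an input signal $x(t)$, find functions $g_j$ whose outputs $y_j(t) = g_j(x(t))$ minimize

\Delta(y_j) = \langle \dot{y}_j^2 \rangle_t

subject to $\langle y_j \rangle_t = 0$ (zero mean), $\langle y_j^2 \rangle_t = 1$ (unit variance), and $\langle y_i y_j \rangle_t = 0$ for $i < j$ (decorrelation and ordering by slowness).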

Dynamic predictive coding by the retina

TLDR
It is shown that when the statistics of the visual input change, the retina adjusts its processing dynamically, and that a network model with plastic synapses can account for the large variety of adaptations observed in retinal ganglion cells.

A learning rule for extracting spatio-temporal invariances

TLDR
It is demonstrated that a model neuron which adapts to make its output vary smoothly over time can learn to extract invariances implicit in its input, using a linear combination of Hebbian and anti-Hebbian synaptic changes.
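
One way to read such a rule (a sketch with assumed constants and a soft variance constraint, not the paper's exact formulation): gradient descent on the mean squared output derivative gives an anti-Hebbian term on temporal derivatives, while an Oja-style Hebbian term keeps the output variance near one.

import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 2000)
slow, noise = np.sin(t), rng.normal(size=t.size)
x = np.column_stack([slow + 0.3 * noise, slow - 0.3 * noise])  # (T, 2) inputs
x = x - x.mean(axis=0)

w = rng.normal(size=2)
eta = 0.05
for _ in range(200):
    y = x @ w
    ydot, xdot = np.diff(y), np.diff(x, axis=0)
    # Anti-Hebbian term on derivatives: pushes the output to vary smoothly.
    anti_hebb = -(ydot[:, None] * xdot).mean(axis=0)
    # Hebbian term with a soft constraint holding the output variance near 1.
    hebb = (1.0 - (y ** 2).mean()) * (y[:, None] * x).mean(axis=0)
    w = w + eta * (anti_hebb + hebb)

print(w / np.linalg.norm(w))  # close (up to sign) to the slow direction (1, 1)/sqrt(2)

On this toy input, the weight vector converges toward the direction that cancels the fast noise and passes the slow sine, i.e., the same direction the slowest SFA feature would select.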

Some informational aspects of visual perception.

TLDR
The focus is on special types of lawfulness that may exist in space at a fixed time and that seem particularly relevant to processes of visual perception.