Derivatives and inverse of cascaded linear+nonlinear neural models

@article{MartinezGarcia2018DerivativesAI,
  title={Derivatives and inverse of cascaded linear+nonlinear neural models},
  author={Marina Martinez-Garcia and Praveen Cyriac and Thomas Batard and Marcelo Bertalm{\'i}o and Jes{\'u}s Malo},
  journal={PLoS ONE},
  year={2018},
  volume={13}
}
In vision science, cascades of Linear+Nonlinear transforms are very successful in modeling a number of perceptual experiences. However, the literature usually focuses only on describing the forward input-output transform. In this work we instead present the mathematics of such cascades beyond the forward transform, namely the Jacobian matrices and the inverse. The fundamental reason for this analytical treatment is that it offers useful analytical insight into the… 
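The abstract centers on three computations: the forward Linear+Nonlinear cascade, its Jacobian, and its inverse. Below is a minimal sketch, not the authors' code, of how those three pieces fit together for a toy cascade; the signed-power nonlinearity, the random weights, and all parameter values are illustrative assumptions.

```python
# Toy cascade of Linear+Nonlinear stages: forward, analytical Jacobian, inverse.
# All choices (signed-power nonlinearity, random weights) are illustrative.
import numpy as np

GAMMA = 0.6  # illustrative exponent for the pointwise nonlinearity

def g(u):
    """Signed power nonlinearity, invertible on all of R."""
    return np.sign(u) * np.abs(u) ** GAMMA

def g_inv(y):
    return np.sign(y) * np.abs(y) ** (1.0 / GAMMA)

def g_prime(u):
    return GAMMA * np.abs(u) ** (GAMMA - 1.0)

def stage_forward(W, x):
    """One Linear+Nonlinear stage: y = g(W x)."""
    return g(W @ x)

def stage_jacobian(W, x):
    """Analytical Jacobian of one stage: diag(g'(W x)) @ W."""
    return np.diag(g_prime(W @ x)) @ W

def stage_inverse(W, y):
    """Invert one stage: x = W^{-1} g^{-1}(y)."""
    return np.linalg.solve(W, g_inv(y))

def cascade_forward(Ws, x):
    for W in Ws:
        x = stage_forward(W, x)
    return x

def cascade_jacobian(Ws, x):
    """Chain rule: J = J_n @ ... @ J_1, each evaluated at the previous stage's output."""
    J = np.eye(x.size)
    for W in Ws:
        J = stage_jacobian(W, x) @ J
        x = stage_forward(W, x)
    return J

def cascade_inverse(Ws, y):
    """Undo the stages in reverse order."""
    for W in reversed(Ws):
        y = stage_inverse(W, y)
    return y

rng = np.random.default_rng(0)
Ws = [np.eye(4) + 0.1 * rng.standard_normal((4, 4)) for _ in range(3)]
x = rng.uniform(0.5, 1.5, size=4)
y = cascade_forward(Ws, x)
print(np.allclose(cascade_inverse(Ws, y), x))   # True: the toy cascade is invertible
print(cascade_jacobian(Ws, x).shape)            # (4, 4) Jacobian of the full cascade
```

In this sketch the cascade Jacobian is simply the chain-rule product of per-stage Jacobians, and the inverse undoes the stages in reverse order.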
Canonical Retina-to-Cortex Vision Model Ready for Automatic Differentiation
TLDR
A Python implementation of a standard multi-layer model of the retina-to-V1 pathway is presented; with the proposed default parameters it reproduces image distortion psychophysics, and it is ready to be optimized with automatic differentiation tools for alternative goals.
Spatio-chromatic information available from different neural layers via Gaussianization
  • J. Malo
  • Computer Science
    Journal of Mathematical Neuroscience
  • 2020
TLDR
An empirical estimate of the information transmitted by the system based on a recent Gaussianization technique is proposed and the total correlation measured is consistent with predictions based on the analytical Jacobian of a standard spatio-chromatic model of the retina–cortex pathway.
Visual Information flow in Wilson-Cowan networks.
TLDR
The theoretical and the empirical results show that although this cascade of layers was not optimized for statistical independence in any way, the redundancy between the responses gets substantially reduced along the pathway, and suggest that neural field models could be an option in image coding to perform image compression.
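Several entries in this list refer to Wilson-Cowan-type networks. As a point of reference, here is a minimal sketch of a Wilson-Cowan-style recurrent update for a one-dimensional layer, assuming a forward-Euler discretization, a Gaussian interaction kernel, and a tanh nonlinearity; these choices and all parameter values are illustrative, not taken from the cited models.

```python
# Minimal Wilson-Cowan-type update for a 1-D layer of units:
#   da/dt = -alpha * a + e - K @ f(a)
# where e is the feed-forward drive, K a Gaussian interaction kernel,
# and f a saturating nonlinearity. All parameters are illustrative.
import numpy as np

def gaussian_kernel(n, sigma):
    """n x n Gaussian interaction matrix over unit positions 0..n-1, row-normalized."""
    idx = np.arange(n)
    d2 = (idx[:, None] - idx[None, :]) ** 2
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def wilson_cowan(e, sigma=3.0, alpha=1.0, dt=0.1, steps=200):
    """Integrate the Wilson-Cowan-type ODE with forward Euler."""
    K = gaussian_kernel(e.size, sigma)
    a = np.zeros_like(e)
    for _ in range(steps):
        a = a + dt * (-alpha * a + e - K @ np.tanh(a))
    return a

e = np.sin(np.linspace(0, 4 * np.pi, 64))  # toy feed-forward input
print(wilson_cowan(e).round(2))
```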
A Connection Between Image Processing and Artificial Neural Networks Layers Through a Geometric Model of Visual Perception
TLDR
This paper introduces a mathematical model inspired by visual perception from which neural network layers and image processing models for color correction can be derived and shows the accuracy of the model for deep learning by testing it on the MNIST dataset for digit classification.
Channel Capacity in Psychovisual Deep-Nets: Gaussianization Versus Kozachenko-Leonenko
TLDR
This work quantifies how neural networks designed from biology, with no statistical training, achieve remarkable performance in information-theoretic terms, and it proposes two empirical estimators of capacity: the classical Kozachenko-Leonenko estimator and a recent estimator based on Gaussianization.
Contrast sensitivity functions in autoencoders
TLDR
It is shown that a very popular type of convolutional neural network, the autoencoder, may develop human-like CSFs when trained to perform some basic low-level vision tasks, but not others (like chromatic adaptation or pure reconstruction after simple bottlenecks).
Cortical-Inspired Wilson–Cowan-Type Equations for Orientation-Dependent Contrast Perception Modelling
TLDR
The evolution model proposed by Bertalmío is considered, and the ability of the model to reproduce orientation-dependent phenomena such as grating induction and a modified version of the Poggendorff illusion is reported.
Appropriate kernels for Divisive Normalization explained by Wilson-Cowan equations
TLDR
A lower-level justification is provided for the specific empirical modification required in the Gaussian kernel of Divisive Normalization: symmetric Gaussian inhibitory relations between wavelet-like sensors wired in the lower-level Wilson-Cowan model lead to the non-symmetric kernel that has to be included empirically in Divisive Normalization to explain a wider range of phenomena.
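For reference, here is a minimal sketch of divisive normalization over a set of wavelet-like responses with a symmetric Gaussian interaction kernel; the kernel shape and all parameter values are illustrative assumptions, not the empirically fitted non-symmetric kernel discussed above.

```python
# Minimal divisive normalization over a vector of linear responses r:
#   x_i = |r_i|**gamma / (b + sum_j H_ij * |r_j|**gamma)
# with H a (symmetric) Gaussian kernel in sensor-index distance.
# All parameters are illustrative.
import numpy as np

def divisive_normalization(r, sigma=2.0, b=0.1, gamma=2.0):
    idx = np.arange(r.size)
    H = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2.0 * sigma ** 2))
    H /= H.sum(axis=1, keepdims=True)
    e = np.abs(r) ** gamma
    return e / (b + H @ e)

r = np.array([0.1, 0.8, 1.2, 0.9, 0.2, 0.05])   # toy linear responses
print(divisive_normalization(r).round(3))
```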
A cortical-inspired model for orientation-dependent contrast perception: a link with Wilson-Cowan equations
TLDR
A differential model is proposed that describes neuro-physiological contrast perception phenomena induced by surrounding orientations, analogous to the one used in [3,2,14] to describe assimilation and contrast phenomena; the main novelty is its explicit dependence on local image orientation.
Convolutional Neural Networks Deceived by Visual Illusions
TLDR
It is shown that CNNs trained for image denoising, image deblurring, and computational color constancy are able to replicate the human response to visual illusions, and that the extent of this replication varies with architecture and spatial pattern size.
...

References

SHOWING 1-10 OF 138 REFERENCES
A Two-Stage Cascade Model of BOLD Responses in Human Visual Cortex
TLDR
A model that accepts an arbitrary band-pass grayscale image as input and predicts blood oxygenation level dependent (BOLD) responses in early visual cortex as output is developed, providing insight into how stimuli are encoded and transformed in successive stages of visual processing.
The impact on midlevel vision of statistically optimal divisive normalization in V1.
TLDR
This work simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities, and addressed how V2 receptive fields pool the responses of V1 model units with different tuning.
A Convolutional Subunit Model for Neuronal Responses in Macaque V1
TLDR
A new subunit model for neurons in primary visual cortex is presented that significantly outperforms three alternative models in terms of cross-validated accuracy and efficiency, and provides a robust and biologically plausible account of the receptive field structure in these neurons across the full spectrum of response properties.
Geometrical and statistical properties of vision models obtained via maximum differentiation
TLDR
An example of a distorted image that is optimized so as to minimize the perceptual error over receptive fields that scale with eccentricity is generated, demonstrating that the errors are barely visible despite a substantial MSE relative to the original image.
From image processing to computational neuroscience: a neural model based on histogram equalization
TLDR
A neural model derived from an image processing technique for histogram equalization is proposed that is able to predict lightness induction phenomena, and improves the efficiency of the representation by flattening both the histogram and the power spectrum of the image signal.
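For context, a minimal sketch of classical histogram equalization, the image-processing operation the cited neural model is derived from: each pixel is mapped through the empirical CDF of the image intensities, which flattens the intensity histogram. The toy data and bin counts below are illustrative.

```python
# Classical histogram equalization: map each intensity through the empirical CDF.
import numpy as np

def equalize(img, levels=256):
    """Return an image whose intensity histogram is approximately flat."""
    hist, bin_edges = np.histogram(img, bins=levels, range=(0.0, 1.0))
    cdf = np.cumsum(hist) / img.size
    bins = np.clip(np.digitize(img, bin_edges[1:-1]), 0, levels - 1)
    return cdf[bins]

rng = np.random.default_rng(0)
img = rng.beta(2.0, 5.0, size=(64, 64))              # skewed toy "image" in [0, 1]
flat = equalize(img)
print(np.histogram(flat, bins=8, range=(0, 1))[0])   # roughly uniform counts
```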
V1 non-linear properties emerge from local-to-global non-linear ICA
TLDR
This work shows that by using an unconstrained approach, masking-like behavior emerges directly from natural images, an additional indication that Barlow's efficient encoding hypothesis may explain not only the shape of receptive fields of V1 sensors but also their non-linear behavior.
The spatial structure of a nonlinear receptive field
TLDR
A mechanistic model based on measurements of the physiological properties and connectivity of only the primary excitatory circuitry of the retina successfully predicts ganglion-cell responses to a variety of spatial patterns and thus provides a direct correspondence between circuit connectivity and retinal output.
Model Constrained by Visual Hierarchy Improves Prediction of Neural Responses to Natural Scenes
TLDR
A model-based analysis incorporating knowledge of the feed-forward visual hierarchy offers an improved functional characterization of V1 neurons, and provides a framework for studying the relationship between connectivity and function in visual cortical areas.
Visual aftereffects and sensory nonlinearities from a single statistical framework
TLDR
This study shows that both the response changes that lead to aftereffects and the nonlinear behavior can be simultaneously derived from a single statistical framework: the Sequential Principal Curves Analysis (SPCA).
...