Category-orthogonal object features guide information processing in recurrent neural networks trained for object categorization

Sushrut Thorat, Giacomo Aldegheri, T. Kietzmann
Recurrent neural networks (RNNs) have been shown to perform better than feedforward architectures in visual object categorization tasks, especially in challenging conditions such as cluttered images. However, little is known about the exact computational role of recurrent information flow in these conditions. Here we test RNNs trained for object categorization on the hypothesis that recurrence iteratively aids object categorization via the communication of category-orthogonal auxiliary… 


Bio-inspired neural networks implement different recurrent visual processing strategies than task-trained ones do

Four different kinds of recurrence are added to a feedforward convolutional neural network; all forms are found to increase the network's ability to classify noisy digit images, improving classification performance.

Invariant neural subspaces maintained by feedback modulation

It is shown that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations that maintain an invariant neural subspace in spite of contextual variations.

Degrees of algorithmic equivalence between the brain and its DNN models

The neuroconnectionist research programme

Neuroconnectionism is presented as a cohesive large-scale research programme centered around ANNs as a computational language for expressing falsifiable theories about brain computation, and the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses are described.

Recurrent Connections Aid Occluded Object Recognition by Discounting Occluders

It is shown that the recurrent connections tend to move the network's representation of an occluded digit towards its un-occluded version, suggesting that both the brain and artificial neural networks can exploit recurrent connectivity to aid occluded object recognition.

Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition

It is found that recurrent neural networks outperform feedforward control models at recognising objects, both in the absence of occlusion and in all occlusion conditions, suggesting that the ubiquitous recurrent connections in biological brains are essential for task performance.

Task-Driven Convolutional Recurrent Models of the Visual System

It is found that standard forms of recurrence do not perform well within deep CNNs on the ImageNet task, and novel cells that incorporated two structural features were able to boost task accuracy substantially, suggesting a role for the brain's recurrent connections in performing difficult visual behaviors.

Explicit information for category-orthogonal object properties increases along the ventral stream

For complex naturalistic stimuli, the inferior temporal (IT) population encodes all measured category-orthogonal object properties, including those often considered low-level features, more explicitly than earlier ventral stream areas.

Going in circles is the way forward: the role of recurrence in visual inference

Recurrent neural networks can explain flexible trading of speed and accuracy in biological vision

It is reported that recurrent processing can improve recognition performance compared to similarly complex feedforward networks and also enabled models to behave more flexibly and trade off speed for accuracy.

Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior

Using model- and primate behavior-driven image selection with large-scale electrophysiology in monkeys performing core recognition tasks, Kar et al. provide evidence that automatically engaged recurrent circuits are critical for rapid object identification.

Recurrence is required to capture the representational dynamics of the human visual system

It is established that recurrent models are required to understand information processing in the human ventral stream using time-resolved brain imaging and deep learning.

Individual differences among deep neural network models

Individual differences among DNN instances that arise from varying only the random initialization of the network weights are investigated. This minimal change in initial conditions prior to training leads to substantial differences in intermediate and higher-level network representations, despite the networks achieving indistinguishable classification performance.

Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex

A generalization of both RNN and ResNet architectures is proposed, along with the conjecture that a class of moderately deep RNNs is a biologically plausible model of the ventral stream in visual cortex.