Corpus ID: 235624249

Towards Biologically Plausible Convolutional Networks

@inproceedings{Pogodin2021TowardsBP,
  title={Towards Biologically Plausible Convolutional Networks},
  author={Roman Pogodin and Yash Mehta and Timothy P. Lillicrap and Peter E. Latham},
  booktitle={NeurIPS},
  year={2021}
}
Convolutional networks are ubiquitous in deep learning. They are particularly useful for images, as they reduce the number of parameters, reduce training time, and increase accuracy. However, as a model of the brain they are seriously problematic, since they require weight sharing - something real neurons simply cannot do. Consequently, while neurons in the brain can be locally connected (one of the features of convolutional networks), they cannot be convolutional. Locally connected but non-convolutional networks, however, significantly underperform convolutional ones…
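
The difference is easy to see in code. Below is a minimal PyTorch sketch (shapes, names, and initialization are illustrative, not from the paper) of a locally connected layer: it keeps a convolution's local receptive fields but learns a separate filter at every output position.

import torch
import torch.nn.functional as F

class LocallyConnected2d(torch.nn.Module):
    """Local receptive fields as in a convolution, but with a separate,
    unshared filter at every output position (no weight sharing)."""
    def __init__(self, in_ch, out_ch, in_size, kernel, stride=1):
        super().__init__()
        out_size = (in_size - kernel) // stride + 1
        # One filter per spatial position: (out_ch, positions, patch_dim).
        self.weight = torch.nn.Parameter(
            0.01 * torch.randn(out_ch, out_size * out_size, in_ch * kernel * kernel))
        self.kernel, self.stride, self.out_size = kernel, stride, out_size

    def forward(self, x):  # x: (batch, in_ch, in_size, in_size)
        # Extract patches: (batch, in_ch * kernel * kernel, positions).
        patches = F.unfold(x, self.kernel, stride=self.stride)
        # Apply a different filter at each position.
        out = torch.einsum('bkp,opk->bop', patches, self.weight)
        return out.reshape(x.shape[0], -1, self.out_size, self.out_size)

A convolutional layer would replace self.weight with a single (out_ch, patch_dim) filter bank applied at every position; that sharing is exactly what biological neurons cannot implement.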

Locally connected networks as ventral stream models

TLDR
The more weight sharing a network has, the better it performs on both ImageNet and Brain-Score; locally connected networks outperform their convolutional counterparts on purely neural data, but not on behavioral responses.

MouseNet: A biologically constrained convolutional neural network model for the mouse visual cortex

TLDR
This work introduces a novel framework for building a biologically constrained convolutional neural network model of the mouse visual cortex, and provides evidence that optimizing for task performance does not improve similarity to the corresponding biological system beyond a certain point.

Bag of Tricks for Training Brain-Like Deep Neural Networks

TLDR
The proposed pipeline combines a customized version of CutMix, heavy use of image augmentations, adversarially robust training, fixing the train-test resolution discrepancy, and weight averaging to find a training procedure that maximizes an ANN's average score on the Brain-Score benchmark.
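
Most of these ingredients are standard. CutMix, for instance, is only a few lines in its usual form (the paper's customized version may differ); a NumPy sketch assuming images with trailing (H, W) axes and one-hot labels:

import numpy as np

def cutmix(x_a, y_a, x_b, y_b, alpha=1.0, rng=None):
    """Standard CutMix: paste a random box from image b into image a and
    mix the one-hot labels in proportion to the pasted area."""
    rng = rng or np.random.default_rng()
    h, w = x_a.shape[-2:]
    lam = rng.beta(alpha, alpha)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    x_mix = x_a.copy()
    x_mix[..., y1:y2, x1:x2] = x_b[..., y1:y2, x1:x2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)  # correct for box clipping
    return x_mix, lam * y_a + (1.0 - lam) * y_b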

BioLCNet: Reward-modulated Locally Connected Spiking Neural Networks

TLDR
This work proposes a reward-modulated locally connected spiking neural network, BioLCNet, for visual learning tasks and assesses the robustness of the rewarding mechanism to varying target responses in a classical conditioning experiment.
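
BioLCNet itself is spiking (STDP-based), but the skeleton of a reward-modulated local rule is compact in rate-based form: Hebbian co-activity accumulates in an eligibility trace, and a delayed scalar reward converts the trace into a weight change. A generic sketch, with all names illustrative rather than taken from the paper:

import numpy as np

def reward_modulated_step(w, pre, post, reward, trace, lr=1e-3, decay=0.9):
    """Three-factor update: a local Hebbian term (post x pre) accumulates in
    an eligibility trace; a global scalar reward gates the weight change."""
    trace = decay * trace + np.outer(post, pre)  # local factors
    w += lr * reward * trace                     # global factor
    return w, trace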

Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules

TLDR
This analysis is the first to identify the reason for the generalization gap between artificial and biologically plausible learning rules, which can help guide future investigations into how the brain learns solutions that generalize.

SoftHebb: Bayesian inference in unsupervised Hebbian soft winner-take-all networks

TLDR
The authors' Hebbian algorithm, SoftHebb, minimizes cross-entropy without having access to it, outperforms the more frequently used hard-WTA-based methods, and under certain conditions even outperforms supervised end-to-end backpropagation.
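
In schematic rate-based form, a soft winner-take-all Hebbian step can be sketched as below (an instar-style rule; the exact SoftHebb update differs in detail):

import numpy as np

def soft_wta_hebbian_step(W, x, lr=1e-2, temp=1.0):
    """Neurons compete via a softmax over their activations; each moves its
    weight vector toward the input in proportion to how strongly it won."""
    u = W @ x                                  # preactivations
    y = np.exp((u - u.max()) / temp)
    y /= y.sum()                               # soft competition
    W += lr * y[:, None] * (x[None, :] - W)    # Hebbian move toward input
    return W, y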

Learning cortical representations through perturbed and adversarial dreaming

TLDR
Generating new, virtual sensory inputs via adversarial dreaming during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreaming during NREM sleep improves the robustness of latent representations.

Continual Learning with Deep Artificial Neurons

TLDR
This work introduces Deep Artificial Neurons (DANs)—small neural networks with shared, learnable parameters embedded within a larger network that allow a single network to update its synapses over time with minimal forgetting.
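
The architectural idea can be sketched directly: every "neuron" in a layer is the same tiny network (the shared, slowly learned part), applied to that neuron's own private synaptic inputs (the fast, per-neuron part). A minimal PyTorch sketch, with sizes and names chosen for illustration:

import torch

class DANLayer(torch.nn.Module):
    """Each neuron applies the SAME small MLP (shared parameters) to its own
    private, learnable vector of synaptic inputs."""
    def __init__(self, n_neurons, n_synapses, hidden=8):
        super().__init__()
        # Per-neuron synapses: updated over time, one row per neuron.
        self.synapses = torch.nn.Parameter(0.1 * torch.randn(n_neurons, n_synapses))
        # One inner network shared by every neuron in the layer.
        self.dan = torch.nn.Sequential(
            torch.nn.Linear(n_synapses, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1))

    def forward(self, x):                        # x: (batch, n_synapses)
        drive = x.unsqueeze(1) * self.synapses   # (batch, n_neurons, n_synapses)
        return self.dan(drive).squeeze(-1)       # (batch, n_neurons)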

A Hebbian Approach to Non-Spatial Prelinguistic Reasoning

TLDR
Ring Model B is presented, which is capable of associating visual with auditory stimulus, performing sequential predictions, and predicting reward from experience, and is considered to be a first step towards the formulation of more general models of prelinguistic reasoning.

Credit Assignment Through Broadcasting a Global Error Vector

TLDR
It is proved that the weight updates driven by a broadcast global error vector are matched in sign to the gradient, enabling accurate credit assignment, and the theoretical and empirical results point to a surprisingly powerful role for a global learning signal in training DNNs.
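
The broadcast idea has a simple generic form (in the spirit of direct feedback alignment): every layer receives the same global error vector through its own fixed random matrix, instead of a layer-specific backpropagated gradient. A NumPy sketch with illustrative names:

import numpy as np

def broadcast_global_error(Ws, hs, e, Bs, lr=1e-3):
    """Update every layer from one broadcast error vector e. hs[l] is layer
    l's input activity; Bs[l] is a fixed random matrix mapping e into layer
    l's output space. No error is propagated layer by layer."""
    for l, W in enumerate(Ws):
        delta = Bs[l] @ e                   # broadcast, not backpropagated
        W -= lr * np.outer(delta, hs[l])    # local outer-product update
    return Ws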

References

SHOWING 1-10 OF 66 REFERENCES

Feedback alignment in deep convolutional networks

TLDR
It is demonstrated that a modification of the feedback alignment method that enforces a weaker form of weight symmetry, one that requires agreement of weight sign but not magnitude, can achieve performance competitive with backpropagation.
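
Concretely, sign symmetry replaces the transposed forward weights in the backward pass with their elementwise signs. A one-layer NumPy sketch (ReLU assumed; names illustrative):

import numpy as np

def sign_symmetric_error(W_next, delta_next, pre_activation):
    """Propagate the next layer's error through sign(W_next).T instead of
    W_next.T: feedback matches feedforward weights in sign, not magnitude."""
    return (np.sign(W_next).T @ delta_next) * (pre_activation > 0)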

Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures

TLDR
This work presents results on scaling up biologically motivated deep learning models to datasets that require deep networks with appropriate architectures for good performance, and its implementation details help establish baselines for biologically motivated deep learning schemes going forward.

Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias

TLDR
A method that maps a CNN to its equivalent FCN (denoted as eFCN) is introduced, which enables the comparison of CNN and FCN training dynamics directly in the FCN space and offers interesting insights into the persistence of architectural bias under stochastic gradient dynamics.
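
One brute-force way to realize such a mapping is to probe the conv layer with basis inputs and read off the columns of the equivalent dense matrix. A small PyTorch sketch, for intuition rather than efficiency (function name and shapes are illustrative):

import torch

def conv_to_dense(conv, in_shape):
    """Build W such that W @ x.flatten() == conv(x).flatten() for a bias-free
    conv layer, by passing each standard basis image through the layer."""
    n_in = int(torch.tensor(in_shape).prod())
    basis = torch.eye(n_in).view(n_in, *in_shape)  # one basis image per row
    with torch.no_grad():
        out = conv(basis)
    return out.reshape(n_in, -1).T                 # (n_out, n_in)

conv = torch.nn.Conv2d(1, 2, 3, bias=False)
W = conv_to_dense(conv, (1, 8, 8))  # eFCN weight matrix for 8x8 inputs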

Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation

TLDR
The results suggest that explaining IT requires computational features trained through supervised learning to emphasize the behaviorally important categorical divisions prominently reflected in IT.

Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks

TLDR
This work presents a family of learning rules motivated by the information bottleneck principle that solve all three implausibility issues of backpropagation and require divisive normalization, a known feature of biological networks.
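
Schematically, a three-factor rule pairs local pre- and postsynaptic activity with a layer-wide third factor, and divisive normalization rescales each neuron's activity by the population's. A generic NumPy sketch (not the paper's kernelized rule):

import numpy as np

def three_factor_step(W, pre, post, third_factor, lr=1e-3, eps=1e-6):
    """Local Hebbian term (post x pre) gated by a layer-wide third factor,
    with divisively normalized postsynaptic activity."""
    post_norm = post / (eps + post.sum())  # divisive normalization
    W += lr * third_factor * np.outer(post_norm, pre)
    return W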

Convolutional Neural Networks as a Model of the Visual System: Past, Present, and Future

TLDR
This review highlights what, in the context of CNNs, it means to be a good model in computational neuroscience and the various ways models can provide insight.

Deep convolutional models improve predictions of macaque V1 responses to natural images

TLDR
Multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1 and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories.

Learning Multiple Layers of Features from Tiny Images

TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Towards Learning Convolutions from Scratch

TLDR
This work proposes β-LASSO, a simple variant of the LASSO algorithm that, when applied to fully-connected networks on image classification tasks, learns architectures with local connections and achieves state-of-the-art accuracies for training fully-connected nets.
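
As usually described, β-LASSO is SGD with an l1 penalty plus a pruning threshold β times larger than plain LASSO's, which drives most long-range weights to exactly zero. A one-step NumPy sketch with illustrative hyperparameters:

import numpy as np

def beta_lasso_step(w, grad, lr=1e-2, lam=1e-5, beta=50.0):
    """Gradient step with l1 shrinkage, then zero every weight whose
    magnitude falls below beta * lam; beta > 1 prunes more aggressively
    than plain LASSO, encouraging sparse, local connectivity."""
    w = w - lr * (grad + lam * np.sign(w))
    w[np.abs(w) < beta * lam] = 0.0
    return w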
...