Corpus ID: 235624249

Towards Biologically Plausible Convolutional Networks

@inproceedings{Pogodin2021TowardsBP,
  title={Towards Biologically Plausible Convolutional Networks},
  author={Roman Pogodin and Yash Mehta and Timothy P. Lillicrap and Peter E. Latham},
  booktitle={Neural Information Processing Systems},
  year={2021}
}
Convolutional networks are ubiquitous in deep learning. They are particularly useful for images, as they reduce the number of parameters, reduce training time, and increase accuracy. However, as a model of the brain they are seriously problematic, since they require weight sharing, something real neurons simply cannot do. Consequently, while neurons in the brain can be locally connected (one of the features of convolutional networks), they cannot be convolutional. Locally connected but non…
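
To make the distinction the abstract draws concrete, the sketch below contrasts a standard convolutional layer with a locally connected layer that has the same receptive-field structure but a separate filter at every spatial position. This is a minimal illustration (PyTorch assumed; layer sizes are arbitrary), not code from the paper.

```python
import torch
import torch.nn as nn

class LocallyConnected2d(nn.Module):
    """Like Conv2d, but with a separate filter at each output location
    (no weight sharing). Minimal sketch: stride 1, no padding, no bias,
    square inputs only."""
    def __init__(self, in_ch, out_ch, in_size, kernel):
        super().__init__()
        self.out_size = in_size - kernel + 1
        n_loc = self.out_size ** 2                    # output positions
        self.weight = nn.Parameter(
            0.01 * torch.randn(n_loc, out_ch, in_ch * kernel * kernel))
        self.unfold = nn.Unfold(kernel)

    def forward(self, x):
        patches = self.unfold(x).transpose(1, 2)      # (B, n_loc, in_ch*k*k)
        out = torch.einsum('blf,lof->blo', patches, self.weight)
        return out.transpose(1, 2).reshape(
            x.shape[0], -1, self.out_size, self.out_size)

conv = nn.Conv2d(3, 16, 5, bias=False)                # shared filters
local = LocallyConnected2d(3, 16, in_size=32, kernel=5)
print(sum(p.numel() for p in conv.parameters()))      # 1,200 parameters
print(sum(p.numel() for p in local.parameters()))     # 940,800 parameters
```

Both layers compute the same kind of local weighted sums; only the weight sharing differs, which is exactly the constraint the paper argues real neurons cannot implement.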

Locally connected networks as ventral stream models

The more weight sharing networks have, the better they perform on both ImageNet and Brain-Score; locally connected networks outperform their convolutional counterparts on purely neural data, but not on behavioral responses.

MouseNet: A biologically constrained convolutional neural network model for the mouse visual cortex

This work introduces a novel framework for building a biologically constrained convolutional neural network model of the mouse visual cortex, and provides evidence that optimizing for task performance does not improve similarity to the corresponding biological system beyond a certain point.

Hebbian Deep Learning Without Feedback

With an approach radically different from backpropagation, SoftHebb shows that deep learning over a few layers may be plausible in the brain, and it increases the accuracy of bio-plausible machine learning.

BioLCNet: Reward-modulated Locally Connected Spiking Neural Networks

This work proposes a reward-modulated locally connected spiking neural network, BioLCNet, for visual learning tasks and assesses the robustness of the rewarding mechanism to varying target responses in a classical conditioning experiment.

Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules

It is demonstrated that state-of-the-art biologically-plausible learning rules for training RNNs exhibit worse and more variable generalization performance compared to their machine learning counterparts that follow the true gradient more closely, and a theorem is presented explaining this phenomenon.

SoftHebb: Bayesian inference in unsupervised Hebbian soft winner-take-all networks

The authors' Hebbian algorithm, SoftHebb, minimizes cross-entropy without having access to it; it outperforms the more frequently used hard-WTA-based method and, under certain conditions, even supervised end-to-end backpropagation.
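
For orientation, the NumPy sketch below shows one step of a soft winner-take-all Hebbian update in the spirit of SoftHebb: a softmax over pre-activations replaces a hard winner, and a decay term keeps the weights bounded. The exact form, temperature, and learning rate here are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, eta = 784, 10, 0.01
W = rng.normal(scale=0.1, size=(n_out, n_in))

def softhebb_step(W, x, temperature=1.0):
    u = W @ x                               # pre-activations
    y = np.exp((u - u.max()) / temperature)
    y /= y.sum()                            # soft winner-take-all (softmax)
    # Hebbian term y*x with a decay y*u*W that bounds the weights;
    # the update is local and needs no labels or backpropagated errors
    W += eta * y[:, None] * (x[None, :] - u[:, None] * W)
    return W

x = rng.normal(size=n_in)
W = softhebb_step(W, x)
```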

Learning cortical representations through perturbed and adversarial dreaming

Generating new, virtual sensory inputs via adversarial dreaming during REM sleep is essential for extracting semantic concepts, while replaying episodic memories via perturbed dreaming during NREM sleep improves the robustness of latent representations.

A brain-inspired algorithm for training highly sparse neural networks

Sparse neural networks attract increasing interest as they exhibit comparable performance to their dense counterparts while being computationally efficient. Pruning dense neural networks is among the most widely used approaches to obtaining them.
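
As a point of reference, the standard pruning baseline is magnitude pruning: drop the smallest-magnitude weights and keep a binary mask. The sketch below (PyTorch assumed) illustrates that generic baseline, not the brain-inspired training algorithm the paper proposes.

```python
import torch

def magnitude_prune(weight, sparsity):
    """Binary mask keeping the largest-magnitude weights; a generic
    pruning baseline, not the paper's algorithm."""
    flat = weight.abs().flatten()
    k = int(flat.numel() * sparsity)          # number of weights to drop
    if k == 0:
        return torch.ones_like(weight)
    threshold = flat.kthvalue(k).values       # k-th smallest magnitude
    return (weight.abs() > threshold).float()

w = torch.randn(256, 128)
mask = magnitude_prune(w, sparsity=0.9)
print(f"density after pruning: {mask.mean():.2f}")   # ~0.10
```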

Continual Learning with Deep Artificial Neurons

This work introduces Deep Artificial Neurons (DANs)—small neural networks with shared, learnable parameters embedded within a larger network that allow a single network to update its synapses over time with minimal forgetting.
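
A rough way to picture a DAN is to replace each unit's fixed scalar nonlinearity with a small MLP whose parameters are shared by every unit in the layer. The PyTorch sketch below encodes that picture only; the architecture details and how DANs mitigate forgetting are the paper's, not this code's.

```python
import torch
import torch.nn as nn

class DANLayer(nn.Module):
    """Linear layer whose activation is a tiny shared MLP (the 'deep
    artificial neuron') applied identically to every unit's pre-activation.
    A rough sketch of the idea, not the paper's exact architecture."""
    def __init__(self, in_features, out_features, hidden=4):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)  # per-layer synapses
        self.neuron = nn.Sequential(                        # shared, learnable
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):
        z = self.linear(x)                                  # (B, out_features)
        return self.neuron(z.unsqueeze(-1)).squeeze(-1)     # per-unit MLP

layer = DANLayer(128, 64)
y = layer(torch.randn(8, 128))                              # (8, 64)
```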

A Hebbian Approach to Non-Spatial Prelinguistic Reasoning

Ring Model B is presented, which is capable of associating visual with auditory stimulus, performing sequential predictions, and predicting reward from experience, and is considered to be a first step towards the formulation of more general models of prelinguistic reasoning.

References

Showing 1-10 of 66 references

Feedback alignment in deep convolutional networks

It is demonstrated that a modification of the feedback alignment method that enforces a weaker form of weight symmetry, one that requires agreement of weight sign but not magnitude, can achieve performance competitive with backpropagation.
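
Concretely, sign symmetry replaces the transpose of the forward weights in the backward pass with the transpose of their sign, so feedback agrees with the forward weights in sign but not magnitude. A minimal NumPy sketch for a two-layer network (sizes and learning rate are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = 0.05 * rng.normal(size=(64, 784))
W2 = 0.05 * rng.normal(size=(10, 64))

x, target = rng.normal(size=784), np.eye(10)[3]
h = np.maximum(W1 @ x, 0.0)           # ReLU hidden layer
out = W2 @ h
e = out - target                      # output error

# Backprop would propagate the error with W2.T; sign symmetry
# uses only sign(W2).T as the feedback pathway.
delta_h = (np.sign(W2).T @ e) * (h > 0)

lr = 0.01
W2 -= lr * np.outer(e, h)
W1 -= lr * np.outer(delta_h, x)
```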

Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures

Results are presented on scaling up biologically motivated deep learning models on datasets that require deep networks with appropriate architectures to achieve good performance, and the reported implementation details help establish baselines for biologically motivated deep learning schemes going forward.

Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias

A method that maps a CNN to its equivalent FCN (denoted as eFCN) is introduced, which enables the comparison of CNN and FCN training dynamics directly in the FCN space and offers interesting insights into the persistence of architectural bias under stochastic gradient dynamics.

Deep Supervised, but Not Unsupervised, Models May Explain IT Cortical Representation

The results suggest that explaining IT requires computational features trained through supervised learning to emphasize the behaviorally important categorical divisions prominently reflected in IT.

Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks

This work presents a family of learning rules motivated by the information bottleneck principle, which solve all three implausibility issues of backpropagation and need divisive normalization, a known feature of biological networks.
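
The general shape of a three-factor rule is a local pre- times post-synaptic Hebbian term gated by a third, layer-wise or global modulatory signal, with divisive normalization bounding post-synaptic activity. The NumPy sketch below shows only that generic shape; the paper's specific kernelized information-bottleneck modulator is not reproduced here.

```python
import numpy as np

def three_factor_update(W, pre, post, modulator, eta=0.01, eps=1e-6):
    # divisive normalization of post-synaptic activity (the feature
    # of biological circuits the paper highlights)
    post = post / (eps + post.sum())
    # local Hebbian outer product, gated by the third factor
    return W + eta * modulator * np.outer(post, pre)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(32, 64))
pre, post = rng.random(64), rng.random(32)
W = three_factor_update(W, pre, post, modulator=0.5)
```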

Convolutional Neural Networks as a Model of the Visual System: Past, Present, and Future

This review highlights what, in the context of CNNs, it means to be a good model in computational neuroscience and the various ways models can provide insight.

Deep convolutional models improve predictions of macaque V1 responses to natural images

Multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1 and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories.

Learning Multiple Layers of Features from Tiny Images

It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.

Revisiting Spatial Invariance with Low-Rank Local Connectivity

In experiments with small convolutional networks, it is found that relaxing spatial invariance improves classification accuracy over both convolutional and locally connected layers across the MNIST, CIFAR-10, and CelebA datasets, suggesting that spatial invariance may be an overly restrictive prior.
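
The low-rank construction can be sketched as K shared basis filters mixed with per-position coefficients: rank K=1 with constant coefficients recovers convolution, while an unconstrained mixture approaches a locally connected layer. A minimal PyTorch version (shapes and initialization are illustrative, not the paper's implementation):

```python
import torch
import torch.nn.functional as F

def low_rank_local_conv(x, bases, coeffs):
    """Mix K shared basis filters with per-position coefficients.
    bases: (K, C_out, C_in, k, k); coeffs: (K, H_out, W_out)."""
    outs = [F.conv2d(x, bases[k]) for k in range(bases.shape[0])]
    stacked = torch.stack(outs)                # (K, B, C_out, H_out, W_out)
    return (coeffs[:, None, None] * stacked).sum(dim=0)

x = torch.randn(2, 3, 32, 32)
bases = 0.05 * torch.randn(4, 16, 3, 5, 5)     # K=4 shared basis filters
coeffs = torch.softmax(torch.randn(4, 28, 28), dim=0)  # per-position mixture
y = low_rank_local_conv(x, bases, coeffs)      # (2, 16, 28, 28)
```
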
...