• Corpus ID: 238226713

Biologically Plausible Training Mechanisms for Self-Supervised Learning in Deep Networks

  • Mufeng Tang, Yibo Yang, Y. Amit
  • Published 30 September 2021
  • Computer Science, Biology
  • ArXiv
We develop biologically plausible training mechanisms for self-supervised learning (SSL) in deep networks. SSL with a contrastive loss is more natural, as it does not require labelled data, and its robustness to perturbations yields more adaptable embeddings. Moreover, the perturbation of data required to create positive pairs for SSL is easily produced in a natural environment by observing objects in motion and under variable lighting over time. We propose a contrastive hinge-based loss whose…
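The abstract is truncated before the loss is defined, so the paper's exact formulation is not reproduced here. As a rough illustration only, a generic contrastive hinge loss over paired embeddings (two augmented views per example, with other rows in the batch serving as negatives) might look like the following sketch; the function name, the margin value, and the hardest-negative scheme are assumptions, not the paper's definition:

```python
import numpy as np

def contrastive_hinge_loss(z1, z2, margin=1.0):
    """Hinge-style contrastive loss over a batch of embedding pairs.

    z1, z2: (N, D) embeddings of two views; row i of z1 and z2 form a
    positive pair, and cross-row combinations serve as negatives.
    Illustrative sketch only, not the paper's exact loss.
    """
    # Normalize rows so that dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T                      # (N, N) pairwise similarities
    pos = np.diag(sim)                   # positive-pair similarities
    neg = sim - 2.0 * np.eye(len(sim))   # push the diagonal out of the max
    # Hinge: penalize the hardest negative within `margin` of the positive.
    return np.maximum(neg.max(axis=1) - pos + margin, 0.0).mean()
```

With orthogonal pair embeddings every negative clears the margin and the loss is zero; with collapsed (identical) embeddings it saturates at the margin value.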

Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures
Results are presented on scaling up biologically motivated models of deep learning to datasets that require deep networks with appropriate architectures to achieve good performance, and the implementation details help establish baselines for biologically motivated deep learning schemes going forward.
Training Neural Networks with Local Error Signals
It is demonstrated, for the first time, that layer-wise training can approach state-of-the-art performance on a variety of image datasets, and that a completely backprop-free variant outperforms previously reported results among methods aiming for higher biological plausibility.
Exploring Simple Siamese Representation Learning
  • Xinlei Chen, Kaiming He
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
Surprising empirical results are reported that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
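SimSiam's symmetrized negative-cosine loss with a stop-gradient on the target branch is the mechanism behind the result summarized above; the function names and this NumPy mock-up (where the stop-gradient is only mimicked by treating the targets as constants) are illustrative assumptions, not the paper's code:

```python
import numpy as np

def negative_cosine(p, z):
    """Mean negative cosine similarity between predictor outputs p and
    targets z, both (N, D). In SimSiam, z carries a stop-gradient; in this
    NumPy sketch that just means z is treated as a constant."""
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -(p * z).sum(axis=1).mean()

def simsiam_loss(p1, p2, z1, z2):
    """Symmetrized loss over the two augmented views: each view's predictor
    output is pulled toward the other view's (detached) projection.
    No negative pairs, large batches, or momentum encoder are involved."""
    return 0.5 * (negative_cosine(p1, z2) + negative_cosine(p2, z1))
```

Perfectly aligned branches give the minimum value of -1; orthogonal branches give 0.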
Greedy Layer-Wise Training of Deep Networks
These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
Decoupled Neural Interfaces using Synthetic Gradients
It is demonstrated that in addition to predicting gradients, the same framework can be used to predict inputs, resulting in models which are decoupled in both the forward and backwards pass -- amounting to independent networks which co-learn such that they can be composed into a single functioning corporation.
Deep Learning With Asymmetric Connections and Hebbian Updates
  • Y. Amit
  • Computer Science, Medicine
    Front. Comput. Neurosci.
  • 2019
It is shown that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets, and theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights.
Deep Learning Models of the Retinal Response to Natural Scenes
It is demonstrated that deep convolutional neural networks not only accurately capture sensory circuit responses to natural scenes, but also can yield information about the circuit's internal structure and function.
Learning Multiple Layers of Features from Tiny Images
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Unsupervised Neural Network Models of the Ventral Visual Stream
It is found that neural network models learned with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today’s best supervised methods.
Greedy Layerwise Learning Can Scale to ImageNet
This work uses 1-hidden-layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks; it obtains an 11-layer network that exceeds several members of the VGG model family on ImageNet and can train a VGG-11 model to the same accuracy as end-to-end learning.
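The greedy layer-wise idea in this last entry — solve a 1-hidden-layer problem, freeze the layer, and repeat on its activations — can be sketched in miniature as follows. The function names, widths, learning rates, and the squared-loss auxiliary head are assumptions for illustration, not the paper's recipe:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def train_one_layer(X, y, width=16, steps=300, lr=0.05, seed=0):
    """Solve one 1-hidden-layer problem: fit y from X through a hidden
    layer W and an auxiliary linear head a, with squared loss and plain
    gradient descent. Hypothetical sketch, not the paper's exact setup."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(d, width))
    a = np.zeros(width)
    for _ in range(steps):
        H = relu(X @ W)                      # hidden activations
        err = H @ a - y                      # auxiliary-head residual
        grad_H = np.outer(err, a) * (H > 0)  # backprop one step into W
        a -= lr * H.T @ err / n              # update auxiliary head
        W -= lr * X.T @ grad_H / n           # update hidden layer
    return W, a

def greedy_stack(X, y, depth=3):
    """Build the network greedily: train a layer against its own auxiliary
    head, freeze it, and feed its activations to the next stage."""
    layers, feats = [], X
    for _ in range(depth):
        W, _ = train_one_layer(feats, y)     # aux head is discarded
        layers.append(W)
        feats = relu(feats @ W)              # frozen features for next stage
    return layers, feats
```

Each stage sees only a shallow optimization problem, yet the frozen stack grows one layer deeper per stage.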