Symmetry-Based Representations for Artificial and Biological General Intelligence

@article{Higgins2022SymmetryBasedRF,
  title={Symmetry-Based Representations for Artificial and Biological General Intelligence},
  author={Irina Higgins and S{\'e}bastien Racani{\`e}re and Danilo Jimenez Rezende},
  journal={Frontiers in Computational Neuroscience},
  year={2022},
  volume={16}
}
Biological intelligence is remarkable in its ability to produce complex behavior in many diverse situations through data-efficient, generalizable, and transferable skill acquisition. It is believed that learning “good” sensory representations is important for enabling this; however, there is little agreement as to what a good representation should look like. In this review article we are going to argue that symmetry transformations are a fundamental principle that can guide our search for what…
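
For concreteness, the central notion can be stated as an equivariance condition, following the authors' earlier formalization (Higgins et al., 2018); the LaTeX below is a paraphrase of that definition, not a quotation from the review.

% A representation f : W -> Z of world states W is symmetry-based with
% respect to a group G of transformations acting on W if there is a
% corresponding action of G on the latent space Z making f equivariant:
\[
  f(g \cdot w) \;=\; g \cdot f(w)
  \qquad \text{for all } g \in G,\ w \in W.
\]
% The representation is in addition disentangled when G decomposes as a
% direct product G = G_1 \times \dots \times G_n and Z splits into
% subspaces Z = Z_1 \times \dots \times Z_n such that each G_i acts
% only on the corresponding Z_i.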

Citations

High-performing neural network models of visual cortex benefit from high latent dimensionality
TLDR
Surprisingly, a strong trend is found in the opposite direction to the common assumption that compact representations generalize best: neural networks with high-dimensional image manifolds tend to have better generalization performance when predicting cortical responses to held-out stimuli, in both monkey electrophysiology and human fMRI data.
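
One common way to quantify the latent dimensionality of a set of network activations (an illustration of the concept; the paper's own estimator may differ) is the participation ratio of the PCA eigenvalue spectrum. A minimal sketch in Python:

# Participation ratio: (sum of eigenvalues)^2 / (sum of squared eigenvalues).
# Equals n for n equal-variance dimensions, and ~1 for a single dominant one.
import numpy as np

def participation_ratio(activations):
    """activations: (n_stimuli, n_units) array of responses."""
    centered = activations - activations.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    eigvals = np.clip(eigvals, 0.0, None)  # guard against numerical noise
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

rng = np.random.default_rng(0)
low_d = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 100))  # ~2-d manifold
high_d = rng.normal(size=(1000, 100))                           # ~100-d
print(participation_ratio(low_d), participation_ratio(high_d))
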
Spatio-temporally separable non-linear latent factor learning: an application to somatomotor cortex fMRI data
TLDR
This work proposes a new framework inspired by latent factor analysis and applies it to functional whole-brain data from the human somatomotor cortex and shows that it captures task effects better than the current gold standard of source signal separation, independent component analysis (ICA).
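
As a point of reference for the ICA baseline mentioned above, here is a minimal sketch of linear source separation with scikit-learn's FastICA on toy signals (not fMRI data); the paper's argument is that such static linear unmixing misses non-linear spatio-temporal structure:

# Recover two independent sources from their linear mixtures with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]  # latent signals
mixing = np.array([[1.0, 0.5],
                   [0.5, 1.0]])
observed = sources @ mixing.T            # what we actually "measure"

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)  # sources, up to permutation,
                                         # sign, and scale
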
Abstract representations emerge naturally in neural networks trained to perform multiple tasks
TLDR
It is demonstrated, using both supervised and reinforcement learning, that training on multiple tasks causes abstract representations to emerge naturally, and that these representations enable few-sample learning and reliable generalization on novel tasks.

References

Showing 1–10 of 221 references
Performance-optimized hierarchical models predict neural responses in higher visual cortex
TLDR
This work uses computational techniques to identify a high-performing neural network model that matches human performance on challenging object categorization tasks and shows that performance optimization—applied in a biologically appropriate model class—can be used to build quantitative predictive models of neural processing.
The perceptron: a probabilistic model for information storage and organization in the brain.
TLDR
This article is concerned primarily with the second and third of the paper's three questions (in what form information is stored, and how stored information influences recognition and behavior), which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory.
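
Since the perceptron's learning rule can be stated in a few lines, here is a minimal sketch (modern NumPy phrasing, not Rosenblatt's original formulation):

# Perceptron learning rule: nudge the weights toward each misclassified
# input until the two classes are linearly separated.
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """X: (n, d) inputs; y: labels in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified or on the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, 1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # [ 1.  1. -1. -1.]
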
What Is a Cognitive Map? Organizing Knowledge for Flexible Behavior
Using goal-driven deep learning models to understand sensory cortex
TLDR
This work outlines how the goal-driven hierarchical convolutional neural network (HCNN) approach can be used to delve more deeply into understanding the development and organization of sensory cortical processing.
Group Equivariant Convolutional Networks
TLDR
This work introduces group equivariant convolutional neural networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries, achieving state-of-the-art results on CIFAR10 and rotated MNIST.
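
The core mechanism is easy to demonstrate for the four-fold rotation group C4 (a simplified sketch of the idea; the paper works with the richer groups p4 and p4m): convolve with all four 90-degree rotations of each filter, so that rotating the input permutes, rather than destroys, the resulting features.

# "Lifting" convolution for C4: one output slice per filter rotation.
import torch
import torch.nn.functional as F

def c4_lifting_conv(x, weight):
    """x: (B, C_in, H, W); weight: (C_out, C_in, k, k).
    Returns (B, C_out, 4, H', W'), one slice per rotation."""
    outs = [F.conv2d(x, torch.rot90(weight, r, dims=(2, 3))) for r in range(4)]
    return torch.stack(outs, dim=2)

# Equivariance check: rotating the input equals rotating the feature maps
# and cyclically shifting the group axis.
x = torch.randn(1, 1, 9, 9)
w = torch.randn(8, 1, 3, 3)
lhs = c4_lifting_conv(torch.rot90(x, 1, dims=(2, 3)), w)
rhs = torch.rot90(c4_lifting_conv(x, w), 1, dims=(3, 4)).roll(1, dims=2)
print(torch.allclose(lhs, rhs, atol=1e-5))  # True
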
Representation Learning: A Review and New Perspectives
TLDR
Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
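
Of the families covered by that review, the autoencoder is the most compact to illustrate; a minimal sketch in PyTorch (sizes are arbitrary choices for illustration):

# Undercomplete autoencoder: learn a low-dimensional code by reconstruction.
import torch
from torch import nn

autoencoder = nn.Sequential(
    nn.Linear(784, 32), nn.ReLU(),   # encoder: compress to a 32-d code
    nn.Linear(32, 784),              # decoder: reconstruct the input
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
x = torch.rand(64, 784)              # stand-in for a batch of images

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(x), x)
    loss.backward()
    opt.step()
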
Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
TLDR
This work proposes a novel algorithm for unsupervised representation learning from piece-wise stationary visual data: Variational Autoencoder with Shared Embeddings (VASE), which automatically detects shifts in the data distribution and allocates spare representational capacity to new knowledge, while simultaneously protecting previously learnt representations from catastrophic forgetting.
The geometry of hippocampal CA2 representations enables abstract coding of social familiarity and identity
TLDR
It is demonstrated that the geometry of dCA2 representations in neural activity space enables social familiarity, social identity, and spatial information to be readily disentangled.
SyMetric: Measuring the Quality of Learnt Hamiltonian Dynamics Inferred from Vision
TLDR
This work empirically highlights the problems with existing measures and develops a set of new ones, including a binary indicator of whether the underlying Hamiltonian dynamics have been faithfully captured, collectively called the Symplecticity Metric (SyMetric).
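
The property being probed is concrete: a map of phase space is consistent with Hamiltonian dynamics only if its Jacobian M preserves the symplectic form, M^T J M = J. A minimal numerical sketch of that check (an illustration of the underlying property, not the paper's actual metric):

# Test whether a phase-space map preserves the symplectic form.
import numpy as np

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])  # symplectic form on (q, p)

def is_symplectic(flow, z, eps=1e-6, tol=1e-4):
    """Finite-difference Jacobian of `flow` at z, then test M^T J M = J."""
    M = np.column_stack([(flow(z + eps * e) - flow(z - eps * e)) / (2 * eps)
                         for e in np.eye(2)])
    return np.allclose(M.T @ J @ M, J, atol=tol)

# The exact time-t flow of a harmonic oscillator is a rotation of phase space.
t = 0.7
flow_matrix = np.array([[np.cos(t), np.sin(t)],
                        [-np.sin(t), np.cos(t)]])
z0 = np.array([1.0, 0.5])
print(is_symplectic(lambda z: flow_matrix @ z, z0))  # True: symplectic
print(is_symplectic(lambda z: 2.0 * z, z0))          # False: scaling is not
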
Self-Supervised Learning Disentangled Group Representation as Feature
TLDR
This paper proposes an iterative self-supervised learning (SSL) algorithm, Iterative Partition-based Invariant Risk Minimization (IP-IRM), which grounds the abstract semantics and the group acting on them in concrete contrastive learning.