Learning may need only a few bits of synaptic precision.

@article{Baldassi2016LearningMN,
  title={Learning may need only a few bits of synaptic precision},
  author={Carlo Baldassi and Federica Gerace and Carlo Lucibello and Luca Saglietti and Riccardo Zecchina},
  journal={Physical Review E},
  year={2016},
  volume={93},
  number={5},
  pages={052313}
}
Learning in neural networks poses peculiar challenges when using discretized rather than continuous synaptic states. The choice of discrete synapses is motivated by biological reasoning and experiments, and possibly by hardware implementation considerations as well. In this paper we extend a previous large deviations analysis which unveiled the existence of peculiar dense regions in the space of synaptic states which account for the possibility of learning efficiently in networks with binary…
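To make the discrete-synapse setting concrete, here is a minimal Python sketch of the underlying constraint-satisfaction problem: find a vector of binary (+1/-1) synaptic weights that correctly classifies a set of random patterns. This is purely illustrative and is not the paper's large-deviations analysis or its learning algorithm; the sizes N and P and the naive single-flip search are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)
N, P = 101, 40                          # N binary synapses, P random patterns (load alpha = P/N)
patterns = rng.choice([-1, 1], size=(P, N))
labels = rng.choice([-1, 1], size=P)

def training_errors(w):
    # number of misclassified patterns for a +-1 weight vector w
    return int(np.sum(np.sign(patterns @ w) != labels))

# Naive zero-temperature local search directly in the discrete weight space:
# flip one synapse at a time, accepting flips that do not increase the error count.
w = rng.choice([-1, 1], size=N)
for _ in range(20000):
    i = rng.integers(N)
    w_try = w.copy()
    w_try[i] = -w_try[i]
    if training_errors(w_try) <= training_errors(w):
        w = w_try
    if training_errors(w) == 0:
        break

print("remaining training errors:", training_errors(w))

Even this crude search illustrates the point of the analysis: with discrete weights there is no gradient to follow, and whether such local moves find a zero-error configuration depends on how the solutions are organized in the space of synaptic states.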

Citations

On the role of synaptic stochasticity in training low-precision neural networks
TLDR
It is shown that a neural network model with stochastic binary weights naturally gives prominence to exponentially rare dense regions of solutions with a number of desirable properties such as robustness and good generalization performance, while typical solutions are isolated and hard to find.
Statistical physics of neural systems
TLDR
This work represents learning as an optimization problem, implemented as a local search in synaptic space for specific configurations, known as solutions, that make a neural network able to accomplish a series of different tasks.
Understanding the computational difficulty of a binary-weight perceptron and the advantage of input sparseness
TLDR
A perceptron model that associates binary input patterns with outputs using binary (0 or 1) weights, modeling a single neuron receiving excitatory inputs, is studied, highlighting the heterogeneity of the learning dynamics of the weights.
Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes
TLDR
It is shown that there are regions of the optimization landscape that are both robust and accessible and that their existence is crucial to achieve good performance on a class of particularly difficult learning problems, and an explanation of this good performance is proposed in terms of a nonequilibrium statistical physics framework.
Spike-Based Plasticity Circuits for Always-on On-Line Learning in Neuromorphic Systems
  • M. Payvand, G. Indiveri
  • Computer Science
  • 2019 IEEE International Symposium on Circuits and Systems (ISCAS)
  • 2019
TLDR
This paper proposes spike-based circuits, based on a local gradient-descent learning rule, that also comprise this additional “stop-learning” feature and have a wide range of configurability options over the learning parameters.
A Characterization of the Edge of Criticality in Binary Echo State Networks
TLDR
Binary ESNs are proposed, which are architecturally equivalent to standard ESNs but consider binary activation functions and binary recurrent weights, and a theoretical explanation is provided for the fact that the variance of the input plays a major role in characterizing the EoC.
Out-of-Equilibrium Analysis of Simple Neural Networks (Master's thesis in Physics)
We consider a novel approach to learning in neural networks with discrete synapses [1, 2, 3] and discuss its possible extensions to simple continuous neural networks. The problem of learning is…
From Statistical Physics to Algorithms in Deep Neural Systems
TLDR
In the last few years artificial neural networks were challenged to solve more and more complex tasks, being able, for example, to correctly classify images in 1000 classes, to win over the world champion of Go, to understand human speech, etc.
Shaping the learning landscape in neural networks around wide flat minima
TLDR
This paper shows that the error loss function presents few extremely wide flat minima (WFM) which coexist with narrower minima and critical points, and shows that a slow reduction of the norm of the weights along the learning process also leads to WFM.
Solvable Model for Inheriting the Regularization through Knowledge Distillation
TLDR
A statistical physics framework is introduced that allows an analytic characterization of the properties of knowledge distillation (KD) in shallow neural networks and it is shown that, through KD, the regularization properties of the larger teacher model can be inherited by the smaller student.

References

Showing 1-10 of 33 references
Efficient supervised learning in networks with binary synapses
TLDR
This work developed and studied a neurobiologically plausible on-line learning algorithm, derived from Belief Propagation algorithms, that performs remarkably well in a model neuron with N binary synapses and a discrete number of 'hidden' states per synapse that has to learn a random classification problem.
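As a rough illustration of the kind of rule this reference describes, where each binary synapse is augmented with a discrete hidden state, here is a generic clipped-perceptron-style sketch in Python. It is an assumed, simplified stand-in rather than the BP-derived algorithm of the reference; the number of hidden levels K, the update schedule, and all names are illustrative.

import numpy as np

rng = np.random.default_rng(1)
N, P, K = 101, 40, 5                  # N synapses, P patterns, K hidden levels per sign (assumed)
patterns = rng.choice([-1, 1], size=(P, N))
labels = rng.choice([-1, 1], size=P)

h = rng.integers(-K, K + 1, size=N)   # discrete hidden state of each synapse
h[h == 0] = 1                         # keep visible weights in {-1, +1}

for sweep in range(200):
    mistakes = 0
    for mu in rng.permutation(P):
        xi, sigma = patterns[mu], labels[mu]
        if np.sign(xi @ np.sign(h)) != sigma:      # pattern misclassified
            h = np.clip(h + sigma * xi, -K, K)     # push hidden counters toward the label
            h[h == 0] = sigma                      # visible weight sign(h) stays binary
            mistakes += 1
    if mistakes == 0:
        break

print("sweeps:", sweep + 1, "errors in last sweep:", mistakes)

The visible weight is the sign of the hidden counter, so the synapse itself only ever takes two values while the counter accumulates evidence across presentations; this is the general idea behind several low-precision learning rules, independent of the specific message-passing derivation.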
Subdominant Dense Clusters Allow for Simple Learning and High Computational Performance in Neural Networks with Discrete Synapses.
TLDR
It is shown that discrete synaptic weights can be efficiently used for learning in large-scale neural systems, and lead to unanticipated computational performance, and that these synaptic configurations are robust to perturbations and generalize better than typical solutions.
Generalization Learning in a Perceptron with Binary Synapses
We consider the generalization problem for a perceptron with binary synapses, implementing the Stochastic Belief-Propagation-Inspired (SBPI) learning algorithm which we proposed earlier, and…
Optimal Information Storage and the Distribution of Synaptic Weights: Perceptron versus Purkinje Cell
TLDR
The perceptron, a prototypical feedforward neural network, is analyzed, and the optimal synaptic weight distribution for a perceptron with excitatory synapses is obtained, suggesting that the Purkinje cell can learn up to 5 kilobytes of information, in the form of 40,000 input-output associations.
Learning from examples in large neural networks.
TLDR
Numerical results on training in layered neural networks indicate that the generalization error improves gradually in some cases, and sharply in others, and statistical mechanics is used to study generalization curves in large layered networks.
Capacity of neural networks with discrete synaptic couplings
The authors study the optimal storage capacity of neural networks with discrete local constraints on the synaptic couplings Jij. Models with such constraints include those with binary couplings…
Origin of the computational hardness for learning with binary synapses
  • H. Huang, Y. Kabashima
  • Mathematics, Medicine
  • Physical Review E: Statistical, Nonlinear, and Soft Matter Physics
  • 2014
TLDR
This work analytically derives the Franz-Parisi potential for the binary perceptron problem by starting from an equilibrium solution of weights and exploring the weight space structure around it, which reveals that the weight space is organized into isolated solutions, rather than clusters of exponentially many close-by solutions.
Hippocampal Spine Head Sizes Are Highly Precise
TLDR
In an electron microscopic reconstruction of hippocampal neuropil, single axons making two or more synaptic contacts onto the same dendrites, which would have shared histories of presynaptic and postsynaptic activity, were found.
Learning by message-passing in networks of discrete synapses
We show that a message-passing process allows us to store in binary "material" synapses a number of random patterns which almost saturate the information theoretic bounds. We apply the learning…
A Max-Sum algorithm for training discrete neural networks
TLDR
The algorithm is a variant of the so-called Max-Sum algorithm that performs as well as BP on binary perceptron learning problems, and may be better suited to address the problem on fully-connected two-layer networks, since inherent symmetries in two-layer networks are naturally broken using the MS approach.