Corpus ID: 202539957

Stable fixed points of combinatorial threshold-linear networks.

@article{Curto2019StableFP,
  title={Stable fixed points of combinatorial threshold-linear networks.},
  author={Carina Curto and Jesse T. Geneson and Katherine Morrison},
  journal={arXiv: Neurons and Cognition},
  year={2019}
}
Combinatorial threshold-linear networks (CTLNs) are a special class of neural networks whose dynamics are tightly controlled by an underlying directed graph. In prior work, we showed that target-free cliques of the graph correspond to stable fixed points of the dynamics, and we conjectured that these are the only stable fixed points allowed. In this paper we prove that the conjecture holds in a variety of special cases, including for graphs with very strong inhibition and graphs of size $n \leq… 
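The model behind these statements is the standard CTLN construction from the authors' prior work: given a directed graph $G$ on $n$ nodes and parameters $\epsilon, \delta, \theta > 0$, the weight matrix has $W_{ii} = 0$, $W_{ij} = -1 + \epsilon$ if $j \to i$ in $G$, and $W_{ij} = -1 - \delta$ otherwise, with dynamics $dx_i/dt = -x_i + [\sum_j W_{ij} x_j + \theta]_+$. The sketch below illustrates this setup with the commonly used parameters $\epsilon = 0.25$, $\delta = 0.5$, $\theta = 1$; the helper names and the forward-Euler integrator are our own illustration, not code from the paper.

```python
import numpy as np

def ctln_weights(A, eps=0.25, delta=0.5):
    """CTLN weight matrix from a binary adjacency matrix A,
    where A[i, j] = 1 means the graph has an edge j -> i."""
    W = np.where(A == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def simulate(W, x0, theta=1.0, dt=0.01, T=20.0):
    """Forward-Euler integration of dx/dt = -x + [W x + theta]_+."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
    return x

# A 3-clique: all pairs of nodes bidirectionally connected.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
print(simulate(ctln_weights(A), x0=[0.1, 0.2, 0.15]))
# -> approximately [0.4, 0.4, 0.4], the clique's stable fixed point
```

For a clique on $k$ nodes the corresponding fixed point puts every active neuron at $\theta / (1 + (1-\epsilon)(k-1))$, which for $k = 3$ and the parameters above is $1/2.5 = 0.4$, matching the simulation.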

Citations

Core motifs predict dynamic attractors in combinatorial threshold-linear networks
The results suggest that graphical properties of the connectivity can be used to predict a network’s complex repertoire of nonlinear dynamics.
Nerve theorems for fixed points of neural networks
It is found that the nerve not only constrains the fixed points of CTLNs, but also gives insight into the transient and asymptotic dynamics, because the flow of activity in the network tends to follow the edges of the nerve.
Sequential Attractors in Combinatorial Threshold-Linear Networks
This work finds that architectures based on generalizations of cycle graphs produce limit cycle attractors that can be activated to generate transient or persistent sequences, and shows how the structure of certain architectures gives insight into the sequential dynamics of the corresponding attractor.
Homotopy Theoretic and Categorical Models of Neural Information Networks
A novel mathematical formalism for the modeling of neural information networks endowed with additional structure in the form of assignments of resources, either computational or metabolic or informational, is developed.
Topological Model of Neural Information Networks
This is a brief overview of an ongoing research project, involving topological models of neural information networks and the development of new versions of associated information measures that can be…

References

Showing 1–10 of 16 references.
Fixed Points of Competitive Threshold-Linear Networks
This work provides two novel characterizations for the set of fixed points of a competitive TLN in terms of a simple sign condition and a concept of domination, which provide the foundation for a kind of graphical calculus to infer features of the dynamics from a network's connectivity.
Pattern Completion in Symmetric Threshold-Linear Networks
This work characterizes stable fixed points of general threshold-linear networks with constant external drive and discovers constraints on the coexistence of fixed points involving different subsets of active neurons.
Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks
By viewing permitted sets as memories stored in the synaptic connections, this work provides a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks.
Diversity of emergent dynamics in competitive threshold-linear networks: a preliminary report
This work finds conditions that guarantee the absence of steady states while maintaining bounded activity; these conditions lead to a combinatorial family of competitive threshold-linear networks, parametrized by a simple directed graph.
Flexible Memory Networks
This work develops some matrix-theoretic tools and presents them in a self-contained section independent of the neuroscience context, finding a close connection between maximally flexible networks and rank 1 matrices.
Selectively Grouping Neurons in Recurrent Networks of Lateral Inhibition
This work shows that competition between arbitrary groups of neurons can be realized by organizing lateral inhibition in linear threshold networks, calculates the optimal sparsity of the groups (maximizing group entropy), and argues that these results are first steps toward attractor theories in hybrid analog-digital networks.
The worst-case time complexity for generating all maximal cliques and computational experiments
Encoding Binary Neural Codes in Networks of Threshold-Linear Neurons
A simple encoding rule is introduced that selectively turns “on” synapses between neurons that co-appear in one or more patterns; certain types of neural codes prove natural in the context of these networks, meaning the full code can be accurately learned from a highly undersampled set of patterns.
Algorithm 457: finding all cliques of an undirected graph
Two backtracking algorithms are presented, using a branch-and-bound technique [4] to cut off branches that cannot lead to a clique; cliques are generated in a rather unpredictable order in an attempt to minimize the number of branches traversed. (A sketch of this backtracking scheme appears after the reference list.)
Pattern retrieval in threshold-linear associative nets.
Results from simulations of pattern retrieval in a large-scale, sparsely connected network of threshold-linear neurons are presented, showing the system is capable of retrieving states strongly correlated with one of the stored patterns even when the initial state is a highly degraded version of one of these patterns.
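Because stable fixed points of CTLNs are conjectured to correspond exactly to target-free cliques, enumerating the maximal cliques of a graph, the subject of Algorithm 457 above, is a natural computational companion to these results. Algorithm 457 is the Bron–Kerbosch procedure; the following is a minimal Python sketch of that backtracking scheme with the common pivoting refinement, run on an invented example graph.

```python
def bron_kerbosch(R, P, X, adj, cliques):
    """Backtracking enumeration of maximal cliques (Bron-Kerbosch).
    R: current partial clique, P: candidate vertices, X: excluded vertices."""
    if not P and not X:
        cliques.append(set(R))  # R is maximal: nothing can extend it
        return
    # Pivot on a vertex covering many candidates; skipping its neighbors
    # is the bound that cuts off branches that cannot yield a new clique.
    pivot = max(P | X, key=lambda u: len(adj[u] & P))
    for v in list(P - adj[pivot]):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P.remove(v)
        X.add(v)

# Undirected graph as adjacency sets: a triangle plus a pendant edge.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(cliques)  # [{0, 1, 2}, {2, 3}]
```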