Sparse neural codes and convexity

@article{Jeffs2019SparseNC,
  title={Sparse neural codes and convexity},
  author={R. Amzi Jeffs and Mohamed Omar and Natchanon Suaysom and Aleina Wachtel and Nora Youngs},
  journal={Involve, a Journal of Mathematics},
  year={2019}
}
Determining how the brain stores information is one of the most pressing problems in neuroscience. In many instances, the collection of stimuli for a given neuron can be modeled by a convex set in $\mathbb{R}^d$. Combinatorial objects known as \emph{neural codes} can then be used to extract features of the space covered by these convex regions. We apply results from convex geometry to determine which neural codes can be realized by arrangements of open convex sets. We restrict our attention… 
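As a concrete illustration of the objects described in the abstract (a sketch, not code from the paper): a neural code records, for each point of the stimulus space, which sets of the arrangement contain it. The minimal example below extracts the code of an arrangement of open intervals (convex sets in $\mathbb{R}^1$) by sampling; the intervals are illustrative choices.

```python
def neural_code(intervals, samples=10_000):
    """Return the set of codewords realized by open intervals in R.

    A codeword is the (frozen) set of indices i with x in U_i,
    collected over sample points x. Sketch only: a symbolic sweep
    over interval endpoints would be exact.
    """
    lo = min(a for a, b in intervals)
    hi = max(b for a, b in intervals)
    code = set()
    for k in range(samples):
        x = lo + (hi - lo) * k / (samples - 1)
        word = frozenset(i for i, (a, b) in enumerate(intervals)
                         if a < x < b)  # open intervals: strict inequalities
        code.add(word)
    return code

# Two overlapping open intervals U_0 = (0, 2), U_1 = (1, 3):
print(sorted(sorted(w) for w in neural_code([(0, 2), (1, 3)])))
# -> [[], [0], [0, 1], [1]]
```

The resulting code {∅, {0}, {0, 1}, {1}} reflects that the two regions overlap but neither contains the other.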


Open and Closed Convexity of Sparse Neural Codes
TLDR
This work shows that closed convex codes do not possess the same property, disproves a conjecture of Goldrup and Phillipson, and presents an example of a code that is neither open convex nor closed convex.
Non-Monotonicity of Closed Convexity in Neural Codes
TLDR
This work demonstrates that adding non-maximal codewords can only increase the open embedding dimension by 1, and proves a conjecture of Goldrup and Phillipson that adding a single such codeword can increase the closed embedding dimension by an arbitrarily large amount.
Neural codes, decidability, and a new local obstruction to convexity
TLDR
Giusti and Itskov proved that convex neural codes have no "local obstructions," which are defined via the topology of a code's simplicial complex; this work reveals a stronger type of local obstruction that prevents a code from being convex, and proves that the corresponding decision problem is NP-hard.
Periodic Codes and Sound Localization
TLDR
Properties of periodic codes help to explain several aspects of the behavior observed in the sound localization system of the barn owl, including common errors in localizing pure tones.
Gröbner bases of neural ideals
TLDR
It is proved that if the canonical form of a neural ideal is a Gröbner basis, then it is the universal Gröbner basis (that is, the union of all reduced Gröbner bases).
Periodic neural codes and sound localization in barn owls
TLDR
Properties of periodic codes help to explain several aspects of the behavior observed in the sound localization system of the barn owl, including common errors in localizing pure tones.
Convex Union Representability and Convex Codes
We introduce and investigate $d$-convex union representable complexes: the simplicial complexes that arise as the nerve of a finite collection of convex open sets in ${\mathbb{R}}^d$ whose union is itself convex.
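The nerve construction mentioned in this entry is simple to state: it is the simplicial complex whose faces are the index sets of subcollections with a common point. A minimal sketch (not from the paper), modeling each "set" as a finite set of sample points rather than a genuine convex open set:

```python
from itertools import combinations

def nerve(sets):
    """Return all nonempty index tuples whose sets share a point.

    Sketch only: sets are finite point sets here; for convex open
    sets in R^d one would test intersection geometrically.
    """
    n = len(sets)
    faces = set()
    for r in range(1, n + 1):
        for sigma in combinations(range(n), r):
            common = set.intersection(*(set(sets[i]) for i in sigma))
            if common:  # sigma is a face iff the intersection is nonempty
                faces.add(sigma)
    return faces

# Three sets covering a cycle of points: pairwise intersections are
# nonempty, but no point lies in all three.
U = [{0, 1, 2}, {2, 3, 4}, {4, 5, 0}]
print(sorted(nerve(U)))
# pairs (0,1), (1,2), (0,2) appear, but the triple (0,1,2) does not
```

This example realizes the boundary of a triangle as a nerve, the standard illustration that a nerve need not be a full simplex.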
Convexity of Neural Codes
TLDR
This work considers neural codes arising from place cells, which are neurons that track an animal's position in space, examines algebraic objects associated to neural codes, and completely characterizes a certain class of maps between these objects.
Embedding dimension phenomena in intersection complete codes
  • R. Jeffs
  • Computer Science, Mathematics
    Selecta Mathematica
  • 2021
TLDR
Tverberg's theorem is used to study the structure of "$k$-flexible" sunflowers and consequently to obtain new lower bounds on $\text{odim}(\mathcal C)$ for intersection complete codes $\mathcal C$.

References

What Makes a Neural Code Convex?
TLDR
This work provides a complete characterization of local obstructions to convexity and defines max intersection-complete codes, a family guaranteed to have no local obstructions, a significant advance in understanding the intrinsic combinatorial properties of convex codes.
On Open and Closed Convex Codes
TLDR
It is found that a code that can be realized by a collection of open convex sets may or may not be realizable by closed convex sets, and vice versa, establishing that open convex and closed convex codes are distinct classes.
Obstructions to convexity in neural codes
Neural codes, decidability, and a new local obstruction to convexity
TLDR
Giusti and Itskov proved that convex neural codes have no "local obstructions," which are defined via the topology of a code's simplicial complex; this work reveals a stronger type of local obstruction that prevents a code from being convex, and proves that the corresponding decision problem is NP-hard.
The Neural Ring: An Algebraic Tool for Analyzing the Intrinsic Structure of Neural Codes
TLDR
The main finding is that the neural ring and a related neural ideal can be expressed in a “canonical form” that directly translates to a minimal description of the receptive field structure intrinsic to the code, providing the groundwork for inferring stimulus space features from neural activity alone.
Combinatorial Neural Codes from a Mathematical Coding Theory Perspective
TLDR
It is suggested that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
Lectures on discrete geometry
  • J. Matousek
  • Mathematics
    Graduate texts in mathematics
  • 2002
TLDR
This book is primarily a textbook introduction to various areas of discrete geometry, in which several key results and methods are explained, in an accessible and concrete manner, in each area.
Sparse, Decorrelated Odor Coding in the Mushroom Body Enhances Learned Odor Discrimination
TLDR
It is demonstrated that sparseness is controlled by a negative feedback circuit between Kenyon cells and the GABAergic anterior paired lateral (APL) neuron, and feedback inhibition suppresses Kenyon cell activity to maintain sparse, decorrelated odor coding and thus the odor specificity of memories.