# Sparse neural codes and convexity

@article{Jeffs2019SparseNC,
title={Sparse neural codes and convexity},
author={R. Amzi Jeffs and Mohamed Omar and Natchanon Suaysom and Aleina Wachtel and Nora Youngs},
journal={Involve, a Journal of Mathematics},
year={2019}
}
• Published 1 November 2015
• Computer Science
• Involve, a Journal of Mathematics
Determining how the brain stores information is one of the most pressing problems in neuroscience. In many instances, the collection of stimuli for a given neuron can be modeled by a convex set in $\mathbb{R}^d$. Combinatorial objects known as \emph{neural codes} can then be used to extract features of the space covered by these convex regions. We apply results from convex geometry to determine which neural codes can be realized by arrangements of open convex sets. We restrict our attention…
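The construction sketched in the abstract — reading off a combinatorial code by recording which convex regions contain each stimulus point — can be illustrated in a few lines. This is my own minimal sketch, not code from the paper; the open intervals below are hypothetical one-dimensional receptive fields, and sampling stands in for the actual geometric cover.

```python
# Minimal sketch (illustration only): the neural code realized by an
# arrangement of convex open sets, here open intervals in R^1.
# Interval endpoints are hypothetical choices for illustration.

def neural_code(intervals, samples=1000):
    """Collect the codewords (supports) realized by sample points.

    Each codeword records which open intervals contain a sample point.
    """
    lo = min(a for a, _ in intervals) - 1.0
    hi = max(b for _, b in intervals) + 1.0
    code = set()
    for i in range(samples):
        x = lo + (hi - lo) * i / (samples - 1)
        word = frozenset(n for n, (a, b) in enumerate(intervals)
                         if a < x < b)
        code.add(word)
    return code

# Three overlapping intervals: U0 = (0, 2), U1 = (1, 3), U2 = (2.5, 4)
code = neural_code([(0, 2), (1, 3), (2.5, 4)])
print(sorted(sorted(w) for w in code))
# -> [[], [0], [0, 1], [1], [1, 2], [2]]
```

Note that the codeword {0, 2} never appears, since U0 and U2 are disjoint; which subsets can and cannot occur is exactly the combinatorial data the paper studies.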
## 16 Citations

Open and Closed Convexity of Sparse Neural Codes
• Computer Science
• 2019
This work shows that closed convex codes do not possess the same property, disproving a conjecture of Goldrup and Phillipson, and presents an example of a code that is neither open convex nor closed convex.
Non-Monotonicity of Closed Convexity in Neural Codes
• Computer Science
Vietnam Journal of Mathematics
• 2021
This work demonstrates that adding non-maximal codewords can only increase the open embedding dimension by 1, and proves a conjecture of Goldrup and Phillipson that adding a single such codeword can increase the closed embedding dimension by an arbitrarily large amount.
Neural codes, decidability, and a new local obstruction to convexity
• Computer Science
SIAM J. Appl. Algebra Geom.
• 2019
Giusti and Itskov showed that convex neural codes have no "local obstructions," which are defined via the topology of a code's simplicial complex. This work reveals a stronger type of local obstruction that prevents a code from being convex, and proves that the corresponding decision problem is NP-hard.
Periodic Codes and Sound Localization
• Computer Science
• 2019
Properties of periodic codes help to explain several aspects of the behavior observed in the sound localization system of the barn owl, including common errors in localizing pure tones.
Gröbner bases of neural ideals
• Mathematics, Computer Science
Int. J. Algebra Comput.
• 2018
It is proved that if the canonical form of a neural ideal is a Gröbner basis, then it is the universal Gröbner basis (that is, the union of all reduced Gröbner bases).
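The objects in this result are easy to experiment with. A minimal sketch, assuming SymPy is available: the two-neuron code below is a hypothetical example of mine, not from the cited paper, and the neural ideal is generated by the indicator polynomials of the non-codewords together with the Boolean relations.

```python
# Minimal sketch (illustration only): the neural ideal of a hypothetical
# two-neuron code C = {00, 10, 11}, and a Groebner basis for it over F_2.
from functools import reduce
from operator import mul

from sympy import groebner, symbols

x = symbols('x1 x2')
n = len(x)

def indicator(word):
    """Polynomial that is 1 at `word` and 0 at every other 0/1 point."""
    return reduce(mul, (x[i] if word[i] else 1 - x[i] for i in range(n)), 1)

all_words = [(a, b) for a in (0, 1) for b in (0, 1)]
C = [(0, 0), (1, 0), (1, 1)]  # the codewords of the code

# The neural ideal is generated by indicators of the *non*-codewords;
# the Boolean relations x_i^2 - x_i restrict attention to 0/1 points.
gens = [indicator(w) for w in all_words if w not in C]
gens += [xi**2 - xi for xi in x]

G = groebner(gens, *x, modulus=2, order='lex')
print(G.exprs)
```

Every generator vanishes on every codeword by construction, so the variety of the ideal (inside {0, 1}^2) recovers exactly the code C.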
Periodic neural codes and sound localization in barn owls
• Computer Science
Involve, a Journal of Mathematics
• 2022
Properties of periodic codes help to explain several aspects of the behavior observed in the sound localization system of the barn owl, including common errors in localizing pure tones.
Convex Union Representability and Convex Codes
• Mathematics
International Mathematics Research Notices
• 2019
We introduce and investigate $d$-convex union representable complexes: the simplicial complexes that arise as the nerve of a finite collection of convex open sets in ${\mathbb{R}}^d$ whose union is convex.
Convexity of Neural Codes
This work considers neural codes arising from place cells, neurons that track an animal's position in space; it examines algebraic objects associated to neural codes and completely characterizes a certain class of maps between these objects.
Embedding dimension phenomena in intersection complete codes
• R. Jeffs
• Computer Science, Mathematics
Selecta Mathematica
• 2021
Tverberg's theorem is used to study the structure of "$k$-flexible" sunflowers and consequently to obtain new lower bounds on $\text{odim}(\mathcal{C})$ for intersection-complete codes $\mathcal{C}$.

## References

What Makes a Neural Code Convex?
• Computer Science
SIAM J. Appl. Algebra Geom.
• 2017
This work provides a complete characterization of local obstructions to convexity and defines max intersection-complete codes, a family guaranteed to have no local obstructions, a significant advance in understanding the intrinsic combinatorial properties of convex codes.
On Open and Closed Convex Codes
• Computer Science
Discret. Comput. Geom.
• 2019
It is found that a code that can be realized by a collection of open convex sets may or may not be realizable by closed convex sets, and vice versa, establishing that open convex and closed convex codes are distinct classes.
Obstructions to convexity in neural codes
• Computer Science
• 2017
Neural codes, decidability, and a new local obstruction to convexity
• Computer Science
SIAM J. Appl. Algebra Geom.
• 2019
Giusti and Itskov showed that convex neural codes have no "local obstructions," which are defined via the topology of a code's simplicial complex. This work reveals a stronger type of local obstruction that prevents a code from being convex, and proves that the corresponding decision problem is NP-hard.
The Neural Ring: An Algebraic Tool for Analyzing the Intrinsic Structure of Neural Codes
• Computer Science, Psychology
Bulletin of mathematical biology
• 2013
The main finding is that the neural ring and a related neural ideal can be expressed in a “canonical form” that directly translates to a minimal description of the receptive field structure intrinsic to the code, providing the groundwork for inferring stimulus space features from neural activity alone.
Combinatorial Neural Codes from a Mathematical Coding Theory Perspective
• Computer Science
Neural Computation
• 2013
It is suggested that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction but also reflects relationships between stimuli.
Lectures on discrete geometry
• J. Matoušek
• Mathematics
• 2002