Fabio Anselmi

Extracellular ATP controls various signaling systems, including the propagation of intercellular Ca2+ signals (ICS). Connexin hemichannels, P2X7 receptors (P2X7Rs), pannexin channels, anion channels, vesicles, and transporters are putative conduits for ATP release, but their involvement in ICS remains controversial. We investigated ICS in cochlear organotypic…
The present phase of Machine Learning is characterized by supervised learning algorithms relying on large sets of labeled examples (n→∞). The next phase is likely to focus on algorithms capable of learning from very few labeled examples (n→1), as humans seem able to do. We propose an approach to this problem and describe the underlying theory, based on…
We discuss data representations which can be learned automatically from data, are invariant to transformations, and are at the same time selective, in the sense that two points have the same representation only if one is a transformation of the other. The mathematical results here sharpen some of the key claims of i-theory, a recent theory of feedforward…
Connexin 26 (Cx26) and connexin 30 (Cx30) are encoded by two genes (GJB2 and GJB6, respectively) that are found within 50 kb in the same complex deafness locus, DFNB1. Immunocytochemistry and quantitative PCR analysis of Cx30 KO mouse cultures revealed that Cx26 is downregulated at the protein level and at the mRNA level in nonsensory cells located between…
Connexin 26 (Cx26) and connexin 30 (Cx30) form hemichannels that release ATP from the endolymphatic surface of cochlear supporting and epithelial cells, and also form gap junction (GJ) channels that allow the concomitant intercellular diffusion of Ca2+-mobilizing second messengers. Released ATP in turn activates G-protein-coupled P2Y2 and P2Y4 receptors,…
In i-theory, a typical layer of a hierarchical architecture consists of HW modules pooling the dot products of the inputs to the layer with the transformations of a few templates under a group. Such layers include as special cases the convolutional layers of Deep Convolutional Networks (DCNs) as well as the non-convolutional layers (when the group contains…
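The pooling scheme described in this abstract can be illustrated with a minimal sketch. Here the group is taken, for concreteness, to be discrete 1D cyclic shifts (a simple translation group), and max is used as the pooling function; the function and variable names are illustrative, not from the paper. Because shifting the input only permutes the set of dot products with the shifted copies of a template, the pooled value is invariant to the transformation:

```python
import numpy as np

def signature(x, templates, pool=np.max):
    """Pooled dot products of x with all group-transformed templates.

    The group here is the set of cyclic shifts of a 1D signal; each
    template contributes one pooled value to the signature.
    """
    sig = []
    for t in templates:
        # dot products of x with every shifted copy of the template
        dots = [x @ np.roll(t, s) for s in range(len(t))]
        sig.append(pool(dots))
    return np.array(sig)

rng = np.random.default_rng(0)
x = rng.normal(size=16)
templates = rng.normal(size=(3, 16))  # a few templates, as in i-theory

s1 = signature(x, templates)
s2 = signature(np.roll(x, 5), templates)  # group-transformed input
print(np.allclose(s1, s2))  # pooled signature is invariant to the shift
```

Replacing the shift group with the identity alone recovers an ordinary dot-product (fully connected) layer, which is the sense in which such layers include convolutional pooling as a special case.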
We propose that the main computational goal of the ventral stream is to provide a hierarchical representation of new objects/images which is invariant to transformations, stable with respect to small perturbations, and discriminative for recognition, and that this representation may be learned continuously, in an unsupervised way, during development and…
This paper explores the theoretical consequences of a simple assumption: the computational goal of the feedforward path in the ventral stream, from V1 through V2 and V4 to IT, is to discount image transformations after learning them during development. Part I assumes that a basic neural operation consists of dot products between input vectors and synaptic…