Publications
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
TLDR
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions, and introduces the Fréchet Inception Distance (FID), which captures the similarity of generated images to real ones better than the Inception Score.
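The FID itself has a closed form: fit Gaussians (μ_x, Σ_x) and (μ_g, Σ_g) to Inception activations of real and generated images and take the Fréchet distance between them. A minimal sketch in Python, assuming the moments have already been estimated from the network's features:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu_x, sigma_x, mu_g, sigma_g):
    """Fréchet distance between two Gaussians fitted to Inception features.

    mu_*    : mean activation vectors for real (x) and generated (g) images
    sigma_* : corresponding covariance matrices
    Lower values indicate a closer match to the real data.
    """
    covmean = sqrtm(sigma_x @ sigma_g)   # matrix square root of the product
    if np.iscomplexobj(covmean):
        covmean = covmean.real           # discard tiny imaginary numerical noise
    diff = mu_x - mu_g
    return float(diff @ diff + np.trace(sigma_x + sigma_g - 2.0 * covmean))
```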
Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons
TLDR
A neural network model is proposed, and a rigorous theoretical analysis shows that its neural activity implements MCMC sampling of a given distribution, in both discrete and continuous time.
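A discrete-time caricature of this picture, assuming the target is a Boltzmann distribution over binary neuron states (the paper's continuous-time spiking model is considerably richer): updating one neuron at a time with a sigmoidal firing probability is exactly Gibbs sampling, a standard MCMC scheme.

```python
import numpy as np

def gibbs_sample(W, b, n_steps, rng):
    """Gibbs sampling of p(z) proportional to exp(b.z + 0.5 z.W.z), binary z.

    W is assumed symmetric with zero diagonal. Neuron k 'fires' (z_k = 1)
    with probability sigmoid(u_k), where the membrane potential
    u_k = b_k + sum_j W_kj z_j encodes the log-odds of z_k.
    """
    z = rng.integers(0, 2, size=len(b))
    samples = []
    for _ in range(n_steps):
        for k in range(len(b)):
            u = b[k] + W[k] @ z
            z[k] = int(rng.random() < 1.0 / (1.0 + np.exp(-u)))
        samples.append(z.copy())
    return np.array(samples)
```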
Bayesian Computation Emerges in Generic Cortical Microcircuits through Spike-Timing-Dependent Plasticity
TLDR
The results suggest that the experimentally observed spontaneous activity and trial-to-trial variability of cortical neurons are essential features of their information processing capability, since their functional role is to represent probability distributions rather than static neural codes.
Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields
TLDR
Coulomb GANs are introduced, which pose the GAN learning problem as a potential field of charged particles in which generated samples are attracted to training set samples but repel each other; it is proved that Coulomb GANs possess only one Nash equilibrium, which is optimal in the sense that the model distribution equals the target distribution.
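A sketch of the potential field at the heart of the construction, assuming a Plummer-style smoothed Coulomb kernel in place of the paper's dimension-dependent kernel; function names and the eps value are illustrative:

```python
import numpy as np

def plummer_kernel(a, b, eps=1.0):
    # Smoothed Coulomb kernel 1 / sqrt(||a - b||^2 + eps^2);
    # eps removes the singularity at a == b.
    return 1.0 / np.sqrt(np.sum((a - b) ** 2, axis=-1) + eps ** 2)

def potential(a, real_samples, fake_samples, eps=1.0):
    """Potential at point a: real samples act as attractors, generated
    samples as repellers. Moving generated samples up this field pushes
    the model distribution toward the data distribution."""
    attraction = plummer_kernel(a, real_samples, eps).mean()
    repulsion = plummer_kernel(a, fake_samples, eps).mean()
    return attraction - repulsion
```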
STDP enables spiking neurons to detect hidden causes of their inputs
TLDR
It is shown here that STDP, in conjunction with a stochastic soft winner-take-all (WTA) circuit, induces spiking neurons to encode through their synaptic weights implicit internal models for subclasses (or "causes") of the high-dimensional spike patterns of hundreds of presynaptic neurons.
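A compact sketch of that mechanism, with spike timing abstracted into a binary input vector, a softmax standing in for the stochastic WTA, and illustrative constants in the weight update:

```python
import numpy as np

def stdp_wta_step(W, b, y, eta, rng):
    """One online update of a stochastic soft winner-take-all circuit.

    y : binary vector of recent presynaptic spikes
    W : weights, one row per WTA neuron; b : excitabilities
    A winner k is drawn from the softmax over membrane potentials, and
    its weights receive the STDP-like update
        dw_ki = eta * (exp(-w_ki) - 1)  if y_i = 1
        dw_ki = -eta                    if y_i = 0
    whose fixed point is w_ki = log P(y_i = 1 | neuron k fires), so each
    neuron's weights form an implicit model of one input 'cause'.
    """
    u = W @ y + b
    p = np.exp(u - u.max())
    p /= p.sum()
    k = rng.choice(len(b), p=p)       # stochastic WTA winner
    W[k] += eta * np.where(y == 1, np.exp(-W[k]) - 1.0, -1.0)
    return k
```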
STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning
TLDR
It is shown here that a major portion of the functionality of hidden Markov models arises already from online application of STDP, without any supervision or rewards; the emergent computing capabilities of the model are demonstrated through several computer simulations.
Speeding up Semantic Segmentation for Autonomous Driving
TLDR
A novel deep network architecture for image segmentation is proposed that maintains high accuracy while being efficient enough for embedded devices, achieving higher segmentation accuracy than other networks tailored to such devices.
Where’s the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network
TLDR
It is demonstrated that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
Reward-Modulated Hebbian Learning of Decision Making
TLDR
This work casts the Bayesian-Hebb learning rule as reinforcement learning in which certain decisions are rewarded, and proves that each synaptic weight will on average converge exponentially fast to the log-odds of receiving a reward when its pre- and postsynaptic neurons are active.
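The fixed point is easy to check numerically. Assuming the update takes the form +eta(1 + e^(-w)) on rewarded trials and -eta(1 + e^(w)) on unrewarded ones whenever pre- and postsynaptic neurons are active (constants and activity gating simplified from the paper), the expected drift vanishes exactly at the log-odds log(p / (1 - p)) and is linear around it:

```python
import numpy as np

def bayesian_hebb_demo(p_reward=0.7, eta=0.005, n_trials=50000, seed=0):
    """Numerical check of the weight's fixed point under the rule sketched
    above. Pre- and postsynaptic activity is assumed on every trial:
        rewarded:    w += eta * (1 + exp(-w))
        unrewarded:  w -= eta * (1 + exp(+w))
    Returns the learned weight and the target log-odds for comparison.
    """
    rng = np.random.default_rng(seed)
    w = 0.0
    for _ in range(n_trials):
        if rng.random() < p_reward:
            w += eta * (1.0 + np.exp(-w))
        else:
            w -= eta * (1.0 + np.exp(w))
    return w, float(np.log(p_reward / (1.0 - p_reward)))
```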