
- Marcin Andrychowicz, Misha Denil, +4 authors Nando de Freitas
- NIPS
- 2016

The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in…
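The paper trains an LSTM to act as the optimizer. A minimal numpy sketch of the underlying idea, with a single learned per-coordinate step size standing in for the LSTM (all names and numbers here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Toy "learning to optimize": the optimizer is one learned scalar w mapping a
# gradient g to the update -w * g. We meta-train w by unrolling T steps of the
# optimizee f(x) = 0.5 * x^2 and descending the final loss in w.
# (The paper learns an LSTM in place of w; this scalar is a stand-in.)

def final_loss(w, x0, T):
    # x_{t+1} = x_t - w * f'(x_t) = (1 - w) * x_t, so x_T = (1 - w)^T * x0
    return 0.5 * (1.0 - w) ** (2 * T) * x0 ** 2

def meta_grad(w, x0, T):
    # d(final_loss)/dw, i.e. backprop through the unrolled optimization
    return -T * (1.0 - w) ** (2 * T - 1) * x0 ** 2

w = 0.1          # initial learned step size
T = 5            # number of unrolled optimization steps
for _ in range(200):
    g = np.mean([meta_grad(w, x0, T) for x0 in (1.0, -2.0, 0.5)])
    w -= 0.05 * g  # meta-optimization step

print(round(w, 3))  # w moves toward 1, the optimal step for f(x) = x^2 / 2
```

The point of the sketch is only the structure: the inner loop is ordinary gradient descent, and the meta-gradient flows through the unrolled trajectory into the optimizer's parameters.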

We present an approach to dimensionality reduction for neural data that is convex, does not make strong assumptions about dynamics, does not require averaging over many trials and is extensible to more complex statistical models that combine local and global influences. The basic method can be seen as an extension of PCA to the exponential family using…
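As context for "an extension of PCA to the exponential family", a minimal numpy sketch of the classical PCA being generalized; the exponential-family likelihood and convex relaxation that the paper adds are not shown:

```python
import numpy as np

# Classical PCA via SVD: project centered data onto the top-k right singular
# vectors. The paper replaces this implicit Gaussian model with
# exponential-family noise (e.g. Poisson spike counts) in a convex form.
rng = np.random.default_rng(0)
Z_true = rng.normal(size=(100, 2))                          # latent factors
X = Z_true @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(100, 10))

Xc = X - X.mean(axis=0)                  # center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                        # k-dimensional latent scores
X_hat = Z @ Vt[:k] + X.mean(axis=0)      # rank-k reconstruction

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(round(err, 3))                     # small: the data is near rank 2
```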

- Eftychios A. Pnevmatikakis, Daniel Soudry, +13 authors Liam Paninski
- Neuron
- 2016

We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative…
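At its core this factors the movie into nonnegative spatial footprints times temporal traces. A minimal sketch with standard Lee–Seung multiplicative updates, not the authors' constrained solver (the constraints on sparsity and indicator dynamics are the paper's contribution and are omitted):

```python
import numpy as np

# Factor a nonnegative data matrix V (pixels x frames) as V ~ A @ C, where
# A holds spatial footprints and C temporal activity traces. These are the
# plain multiplicative NMF updates, which preserve nonnegativity.
rng = np.random.default_rng(0)
pixels, frames, k = 50, 200, 3
V = rng.random((pixels, k)) @ rng.random((k, frames))  # exactly rank-k data

A = rng.random((pixels, k)) + 0.1
C = rng.random((k, frames)) + 0.1
eps = 1e-9                                # avoid division by zero
for _ in range(300):
    C *= (A.T @ V) / (A.T @ A @ C + eps)  # update temporal traces
    A *= (V @ C.T) / (A @ C @ C.T + eps)  # update spatial footprints

err = np.linalg.norm(V - A @ C) / np.linalg.norm(V)
print(round(err, 4))                      # relative reconstruction error
```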

- Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
- ArXiv
- 2016

We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the…
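A minimal sketch of the unrolling idea on a toy bilinear game V(d, g) = d·g, a common hypothetical stand-in for a GAN objective rather than the paper's actual setup. The generator differentiates through one simulated discriminator step, which introduces a damping term:

```python
import numpy as np

# Toy minimax game V(d, g) = d * g: the discriminator ascends in d, the
# generator descends in g. Plain simultaneous gradient play orbits outward;
# unrolling one discriminator step inside the generator objective damps it.
def play(steps, unroll, lr=0.1, eta=0.1, d=1.0, g=1.0):
    for _ in range(steps):
        if unroll:
            # simulated discriminator step: d' = d + eta * dV/dd = d + eta * g
            # generator gradient of V(d', g) = (d + eta*g) * g w.r.t. g:
            grad_g = d + 2 * eta * g
        else:
            grad_g = d              # dV/dg at the current discriminator
        grad_d = g                  # dV/dd
        g, d = g - lr * grad_g, d + eta * grad_d
    return np.hypot(d, g)           # distance from the equilibrium (0, 0)

print(play(500, unroll=False))      # grows: the players cycle and diverge
print(play(500, unroll=True))       # shrinks toward the equilibrium
```

The extra `2 * eta * g` term is exactly what looking ahead through the discriminator's update contributes; in the full method the lookahead is several real gradient steps of the discriminator network.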

- Eftychios A. Pnevmatikakis, Yuanjun Gao, +6 authors Liam Paninski
- 2014

We present a structured matrix factorization approach to analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity of each neuron from the slow dynamics of the calcium indicator. The matrix…

- Nicholas Bartlett, David Pfau, Frank Wood
- ICML
- 2010

We propose a novel dependent hierarchical Pitman-Yor process model for discrete data. An incremental Monte Carlo inference procedure for this model is developed. We show that inference in this model can be performed in constant space and linear time. The model is demonstrated in a discrete sequence prediction task where it is shown to achieve state of the…
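A minimal sketch of the Pitman-Yor predictive rule such hierarchical models are built from, in its Chinese-restaurant form with the simplifying assumption of one table per symbol; the paper's dependent hierarchical construction is not shown:

```python
# Pitman-Yor predictive probabilities with concentration alpha and discount d.
# A seen symbol with count c gets (c - d) / (n + alpha); unseen mass is
# (alpha + d * K) / (n + alpha), where K is the number of distinct symbols.
def py_predictive(counts, alpha=1.0, d=0.5):
    n = sum(counts.values())
    K = len(counts)
    p = {s: (c - d) / (n + alpha) for s, c in counts.items()}
    p["<new>"] = (alpha + d * K) / (n + alpha)
    return p

probs = py_predictive({"a": 3, "b": 1})
print(probs)  # the discount d shifts mass from seen symbols to novel ones
```

Setting d = 0 recovers the ordinary Dirichlet-process (CRP) predictive; the discount is what gives Pitman-Yor its power-law behavior on vocabularies.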

- David Pfau, Nicholas Bartlett, Frank Wood
- NIPS
- 2010

We propose a novel Bayesian nonparametric approach to learning with probabilistic deterministic finite automata (PDFA). We define and develop a sampler for a PDFA with an infinite number of states which we call the probabilistic deterministic infinite automata (PDIA). Posterior predictive inference in this model, given a finite training sequence, can be…
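A minimal sketch of the finite object the PDIA generalizes: a PDFA scoring a sequence. Because transitions are deterministic, the likelihood is a single product along one state path; the infinite-state construction and its sampler are what the paper adds and are not shown. The automaton below is an illustrative assumption:

```python
import numpy as np

# A PDFA: deterministic transitions delta[state][symbol] -> next state, plus
# per-state emission probabilities over symbols.
delta = {0: {"a": 0, "b": 1},
         1: {"a": 0, "b": 1}}
emit = {0: {"a": 0.8, "b": 0.2},
        1: {"a": 0.3, "b": 0.7}}

def log_likelihood(seq, start=0):
    state, ll = start, 0.0
    for sym in seq:
        ll += np.log(emit[state][sym])  # probability of emitting sym here
        state = delta[state][sym]       # deterministic transition
    return ll

print(round(log_likelihood("aabb"), 4))
```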

- David Pfau, Oriol Vinyals
- ArXiv
- 2016

Both generative adversarial networks (GAN) in unsupervised learning and actor-critic methods in reinforcement learning (RL) have gained a reputation for being difficult to optimize. Practitioners in both fields have amassed a large number of strategies to mitigate these instabilities and improve training. Here we show that GANs can be viewed as actor-critic…

- Finale Doshi-Velez, David Pfau, Frank Wood, Nicholas Roy
- IEEE Transactions on Pattern Analysis and Machine…
- 2015

Making intelligent decisions from incomplete information is critical in many applications: for example, robots must choose actions based on imperfect sensors, and speech-based interfaces must infer a user’s needs from noisy microphone inputs. What makes these tasks hard is that often we do not have a natural representation with which to model the…
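The canonical computation behind acting on imperfect sensors is the Bayesian belief update of a POMDP. A minimal numpy sketch of that standard textbook update, not the paper's learned representation; the two-state sensor model is an illustrative assumption:

```python
import numpy as np

# POMDP belief update after taking action a and observing o:
#   b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)
# T[a] is |S| x |S| with T[a][s, s'] = P(s' | s, a); O[a] is |S| x |O|.
def belief_update(b, T, O, a, o):
    predicted = b @ T[a]               # push the belief through the dynamics
    unnorm = O[a][:, o] * predicted    # weight by observation likelihood
    return unnorm / unnorm.sum()       # renormalize to a distribution

# Two-state toy: a noisy sensor that reports the true state 85% of the time.
T = np.array([[[0.9, 0.1],             # transition matrix for action 0
               [0.2, 0.8]]])
O = np.array([[[0.85, 0.15],           # P(obs | state) for action 0
               [0.15, 0.85]]])
b = np.array([0.5, 0.5])
b = belief_update(b, T, O, a=0, o=0)
print(b)                               # belief shifts toward state 0
```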

- Chrisantha Fernando, Dylan Banarse, +5 authors Daan Wierstra
- GECCO
- 2016

In this work we introduce a differentiable version of the Compositional Pattern Producing Network, called the DPPN. Unlike a standard CPPN, the topology of a DPPN is evolved but the weights are learned. A Lamarckian algorithm, that combines evolution and learning, produces DPPNs to reconstruct an image. Our main result is that DPPNs can be evolved/trained…
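A minimal sketch of the "weights are learned" half: a tiny pattern network with a fixed topology whose weights are fit to a target image by gradient descent (finite differences here for brevity). The evolutionary topology search, and the specific network form below, are not from the paper:

```python
import numpy as np

# Tiny pattern-producing network: pixel (x, y) -> sin(w0*x + w1*y + w2).
# In a DPPN this topology would be evolved; here it is fixed and only the
# weights are learned by descending the squared reconstruction error.
xs, ys = np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16))

def render(w):
    return np.sin(w[0] * xs + w[1] * ys + w[2])

target = render(np.array([3.0, -2.0, 0.5]))  # image from known weights

def loss(w):
    return np.mean((render(w) - target) ** 2)

w = np.array([2.0, -1.0, 0.0])
for _ in range(1000):
    grad = np.array([                         # central finite differences
        (loss(w + h) - loss(w - h)) / (2 * 1e-5)
        for h in np.eye(3) * 1e-5
    ])
    w -= 0.3 * grad

print(round(loss(w), 6))                      # reconstruction error shrinks
```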