
- Vincent Dumoulin, Ishmael Belghazi, +4 authors Aaron C. Courville
- ArXiv
- 2016

We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial…

- Eric Jang, Shixiang Gu, Ben Poole
- ArXiv
- 2016

Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution…
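The relaxation behind this estimator (Gumbel-Softmax) can be sketched in a few lines of NumPy — a minimal illustration under assumed hyperparameters, not the paper's implementation; the helper name is ours. Add Gumbel(0, 1) noise to the logits and soften the argmax with a temperature-τ softmax, so the sample is differentiable in the logits:

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Differentiable relaxation of sampling from Categorical(softmax(logits)).

    Adds Gumbel(0, 1) noise to the logits and applies a temperature-tau
    softmax; as tau -> 0 the output approaches a one-hot sample.
    """
    rng = np.random.default_rng() if rng is None else rng
    gumbel = -np.log(-np.log(rng.uniform(size=np.shape(logits))))
    y = (np.asarray(logits) + gumbel) / tau
    y = y - y.max()              # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()

# A soft sample at moderate temperature; shrinking tau sharpens it
# toward a one-hot categorical sample.
soft = gumbel_softmax_sample(np.log([0.1, 0.3, 0.6]), tau=0.5,
                             rng=np.random.default_rng(0))
```

In training, τ is typically annealed: large τ gives low-variance but biased gradients, small τ approaches true categorical samples.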

- Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
- ArXiv
- 2016

We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator’s objective, which is ideal but infeasible in practice, and using the current value of the…

- Jonathan T. Barron, Ben Poole
- ECCV
- 2016

We present the bilateral solver, a novel algorithm for edge-aware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms. Our technique is capable of matching or improving upon state-of-the-art results on several different computer vision tasks (stereo, depth…
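For context on what "edge-aware smoothing" means here, a brute-force 1-D bilateral filter illustrates the simple-filtering end of the trade-off the abstract describes (a toy baseline, not the bilateral solver itself; the function name and sigmas are ours): smoothing weights decay with both spatial distance and value difference, so averaging never crosses strong edges.

```python
import numpy as np

def bilateral_smooth_1d(signal, sigma_s=2.0, sigma_r=0.1):
    """Brute-force 1-D bilateral filter.

    Each output is a weighted average whose weights fall off with spatial
    distance (sigma_s) and with value difference (sigma_r), so smoothing
    stops at strong edges instead of blurring across them.
    """
    signal = np.asarray(signal, dtype=float)
    idx = np.arange(len(signal))
    out = np.empty_like(signal)
    for i in range(len(signal)):
        w = np.exp(-(idx - i) ** 2 / (2 * sigma_s ** 2)
                   - (signal - signal[i]) ** 2 / (2 * sigma_r ** 2))
        out[i] = np.dot(w, signal) / w.sum()
    return out

# A clean unit step survives untouched: cross-edge range weights are
# ~exp(-50), so neither side contaminates the other.
step = np.array([0.0] * 10 + [1.0] * 10)
smoothed = bilateral_smooth_1d(step)
```

The solver in the paper reaches similar edge-aware behavior by optimizing a global objective in bilateral space, which is far faster than this O(n²) loop.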

- Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli
- ICML
- 2014

We present an algorithm for minimizing a sum of functions that combines the computational efficiency of stochastic gradient descent (SGD) with the second order curvature information leveraged by quasi-Newton methods. We unify these disparate approaches by maintaining an independent Hessian approximation for each contributing function in the sum. We maintain…
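The core idea — keep an independent quadratic model per summand, refresh one model per step SGD-style, and move to the minimizer of the summed models — can be sketched on a toy problem where each fᵢ is an explicit 1-D quadratic, so its gradient and curvature are exact (the actual algorithm, SFO, estimates these online in a shared low-dimensional subspace; all names here are illustrative):

```python
import numpy as np

def f_grad_hess(i, x):
    # Toy summands f_i(x) = 0.5 * a_i * (x - c_i)^2 with known curvature.
    a = [1.0, 4.0, 0.5]
    c = [2.0, -1.0, 3.0]
    return a[i] * (x - c[i]), a[i]

h = np.ones(3)    # stored curvature estimate for each f_i's quadratic model
m = np.zeros(3)   # stored minimizer estimate for each model
x = 0.0
for t in range(9):
    i = t % 3                        # visit one function per step, SGD-style
    g, hh = f_grad_hess(i, x)
    h[i], m[i] = hh, x - g / hh      # refresh model i at the current iterate
    x = np.dot(h, m) / h.sum()       # Newton-like step: minimize sum of models
```

Because the summands here are exactly quadratic, the iterate lands on the true minimizer of the sum, Σaᵢcᵢ/Σaᵢ = −1/11, after each model has been refreshed once.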

- John R. Anderson, Daniel Bothell, Jon M. Fincham, Abraham R. Anderson, Ben Poole, Yulin Qin
- J. Cognitive Neuroscience
- 2011

Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two-part games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and activation patterns in various brain regions.…

- Jonathan Chit Sing Leong, Jennifer Judson Esch, Ben Poole, Surya Ganguli, Thomas Robert Clandinin
The Journal of Neuroscience: the official…
- 2016

Across animal phyla, motion vision relies on neurons that respond preferentially to stimuli moving in one, preferred direction over the opposite, null direction. In the elementary motion detector of Drosophila, direction selectivity emerges in two neuron types, T4 and T5, but the computational algorithm underlying this selectivity remains…

- Friedemann Zenke, Ben Poole, Surya Ganguli
- ArXiv
- 2017

Deep learning has led to remarkable advances when applied to problems where the data distribution does not change over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, and solve a diversity of tasks simultaneously. Furthermore, synapses in biological neurons are not simply real-valued scalars, but…

We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially…
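The chaotic phase is easy to glimpse numerically (a rough illustration under assumed hyperparameters, not the paper's mean-field calculation): push two nearly identical inputs through a deep random tanh network and watch their separation grow with depth when the weight variance is large.

```python
import numpy as np

# Propagate two nearby inputs through a deep random tanh network. With a
# weight scale sigma_w well above the order-to-chaos transition, the tiny
# input perturbation is amplified layer by layer (the chaotic phase);
# with sigma_w well below it, the perturbation would instead shrink.
rng = np.random.default_rng(0)
n, depth, sigma_w = 500, 20, 2.5
x1 = rng.standard_normal(n)
x2 = x1 + 1e-3 * rng.standard_normal(n)   # tiny perturbation of x1
d0 = np.linalg.norm(x1 - x2)

for _ in range(depth):
    W = rng.standard_normal((n, n)) * sigma_w / np.sqrt(n)
    x1, x2 = np.tanh(W @ x1), np.tanh(W @ x2)

d_final = np.linalg.norm(x1 - x2)         # orders of magnitude larger than d0
```

The exponential growth of such perturbations is one face of the expressivity result: in the chaotic phase the network computes functions whose curvature grows with depth.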

We present a framework to understand GAN training as alternating between density ratio estimation and approximate divergence minimization. This provides an interpretation for the mismatched GAN generator and discriminator objectives often used in practice, and explains the problem of poor sample diversity. Further, we derive a family of generator objectives that…
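The density-ratio-estimation half of this view is easy to demonstrate on its own: a logistic-regression "discriminator" trained to separate samples of p from samples of q has logits that estimate log p(x)/q(x), since the optimal D satisfies D = p/(p+q). A self-contained 1-D sketch with two Gaussians (all names and hyperparameters are ours; a GAN replaces these with high-dimensional data and generator distributions):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(0.0, 1.0, 4000)   # "data" distribution:  N(0, 1)
q = rng.normal(1.0, 1.0, 4000)   # "model" distribution: N(1, 1)

# Logistic regression on features (x, x^2, 1). For two Gaussians the true
# log-ratio, log p/q = 0.5 - x, lies exactly in this feature class.
X = np.concatenate([p, q])
F = np.stack([X, X ** 2, np.ones_like(X)], axis=1)
y = np.concatenate([np.ones_like(p), np.zeros_like(q)])

w = np.zeros(3)
for _ in range(3000):                       # plain full-batch gradient descent
    s = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.5 * F.T @ (s - y) / len(y)

# The classifier logit is the estimated log density ratio; at x = 0 the
# features are (0, 0, 1), so the estimate there is just the intercept.
log_ratio_at_0 = w[2]                       # true value: 0.5
```

Alternating this estimation step with a generator update that lowers the estimated divergence is exactly the training loop the framework reinterprets.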