
- Eric Jang, Shixiang Gu, Ben Poole
- ArXiv
- 2016

Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution…
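The relaxation this abstract alludes to can be sketched with the Gumbel-Softmax trick: perturb the logits with Gumbel noise and take a temperature-controlled softmax instead of a hard, non-differentiable argmax. This is a minimal NumPy sketch; the function names, seed, and temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def sample_gumbel(shape, rng, eps=1e-20):
    # Gumbel(0, 1) noise via inverse transform of uniform samples.
    u = rng.uniform(size=shape)
    return -np.log(-np.log(u + eps) + eps)

def gumbel_softmax(logits, temperature, rng):
    # Softmax over Gumbel-perturbed logits: a differentiable
    # relaxation of drawing a one-hot categorical sample.
    y = (logits + sample_gumbel(logits.shape, rng)) / temperature
    e = np.exp(y - y.max())  # stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
probs = gumbel_softmax(np.log(np.array([0.1, 0.3, 0.6])), temperature=0.5, rng=rng)
```

As the temperature approaches zero the output approaches a one-hot vector; at higher temperatures it stays smooth, which is what makes backpropagation through the sample possible.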

- Vincent Dumoulin, Ishmael Belghazi, +4 authors Aaron C. Courville
- ArXiv
- 2016

We introduce the adversarially learned inference (ALI) model, which jointly learns a generation network and an inference network using an adversarial process. The generation network maps samples from stochastic latent variables to the data space while the inference network maps training examples in data space to the space of latent variables. An adversarial…

- Luke Metz, Ben Poole, David Pfau, Jascha Sohl-Dickstein
- ArXiv
- 2016

We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the…

- Jascha Sohl-Dickstein, Ben Poole, Surya Ganguli
- ICML
- 2014

We present an algorithm for minimizing a sum of functions that combines the computational efficiency of stochastic gradient descent (SGD) with the second order curvature information leveraged by quasi-Newton methods. We unify these disparate approaches by maintaining an independent Hessian approximation for each contributing function in the sum. We…

- John R. Anderson, Daniel Bothell, Jon M. Fincham, Abraham R. Anderson, Ben Poole, Yulin Qin
- J. Cognitive Neuroscience
- 2011

Part- and whole-task conditions were created by manipulating the presence of certain components of the Space Fortress video game. A cognitive model was created for two-part games that could be combined into a model that performed the whole game. The model generated predictions both for behavioral patterns and activation patterns in various brain regions.…

We study the expressivity of deep neural networks with random weights. We provide several results, both theoretical and experimental, precisely characterizing their functional properties in terms of the depth and width of the network. In doing so, we illustrate inherent connections between the length of a latent trajectory, local neuron transitions, and…

- Friedemann Zenke, Ben Poole, Surya Ganguli
- ArXiv
- 2017

Deep learning has led to remarkable advances when applied to problems where the data distribution does not change over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, and solve a diversity of tasks simultaneously. Furthermore, synapses in biological neurons are not simply real-valued scalars, but…

- Jonathan T. Barron, Ben Poole
- ECCV
- 2016

We present the bilateral solver, a novel algorithm for edge-aware smoothing that combines the flexibility and speed of simple filtering approaches with the accuracy of domain-specific optimization algorithms. Our technique is capable of matching or improving upon state-of-the-art results on several different computer vision tasks (stereo, depth…

We combine Riemannian geometry with the mean field theory of high dimensional chaos to study the nature of signal propagation in generic, deep neural networks with random weights. Our results reveal an order-to-chaos expressivity phase transition, with networks in the chaotic phase computing nonlinear functions whose global curvature grows exponentially…

- Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli
- ArXiv
- 2014

Autoencoders have emerged as a useful framework for unsupervised learning of internal representations, and a wide variety of apparently conceptually disparate regularization techniques have been proposed to generate useful features. Here we extend existing denoising autoencoders to additionally inject noise before the non-linearity, and at the hidden unit…
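The extension described above — corrupting the input (as in a standard denoising autoencoder) and additionally injecting noise before the nonlinearity and at the hidden units — can be sketched in a few lines. This is a minimal NumPy sketch assuming Gaussian noise and a one-hidden-layer tanh autoencoder; the function names, noise scales, and dimensions are illustrative, not from the paper.

```python
import numpy as np

def noisy_forward(x, W1, W2, rng, sigma_in=0.1, sigma_pre=0.1, sigma_post=0.1):
    # Corrupt the input, as in a standard denoising autoencoder.
    x_noisy = x + sigma_in * rng.normal(size=x.shape)
    # Inject noise into the pre-activation, before the nonlinearity.
    pre = x_noisy @ W1 + sigma_pre * rng.normal(size=W1.shape[1])
    h = np.tanh(pre)
    # Inject noise again at the hidden-unit activations.
    h = h + sigma_post * rng.normal(size=h.shape)
    # Decode; training would minimize reconstruction error against the clean x.
    return h @ W2

rng = np.random.default_rng(0)
x = rng.normal(size=8)
W1 = 0.1 * rng.normal(size=(8, 4))
W2 = 0.1 * rng.normal(size=(4, 8))
x_hat = noisy_forward(x, W1, W2, rng)
```

Training such a model to reconstruct the clean input from all three noise sources acts as a regularizer on both the input mapping and the learned features.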