
We introduce a general and simple structural design called “Multiplicative Integration” (MI) to improve recurrent neural networks (RNNs). MI changes the way in which information from different sources flows and is integrated in the computational building block of an RNN, while introducing almost no extra parameters. The new structure can be easily embedded…
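A minimal NumPy sketch of the idea the abstract describes: replacing the additive combination Wx + Uh inside an RNN cell with a gated multiplicative one. The per-unit gating vectors alpha, beta1, beta2 shown here follow the paper's general formulation as best as can be inferred from the abstract; treat the exact parameterization as illustrative.

```python
import numpy as np

def additive_block(W, U, b, x, h):
    # Standard RNN building block: phi(Wx + Uh + b)
    return np.tanh(W @ x + U @ h + b)

def mi_block(W, U, b, x, h, alpha, beta1, beta2):
    # Multiplicative Integration (sketch of the general form):
    # phi(alpha * (Wx) * (Uh) + beta1 * (Uh) + beta2 * (Wx) + b)
    wx, uh = W @ x, U @ h
    return np.tanh(alpha * wx * uh + beta1 * uh + beta2 * wx + b)

rng = np.random.default_rng(0)
d = 4
W, U = rng.normal(size=(d, d)), rng.normal(size=(d, d))
b = np.zeros(d)
x, h = rng.normal(size=d), rng.normal(size=d)
# alpha/beta are per-unit vectors: only 3*d extra parameters,
# which is the "almost no extra parameters" claim in the abstract.
alpha, beta1, beta2 = np.ones(d), np.ones(d), np.ones(d)
out = mi_block(W, U, b, x, h, alpha, beta1, beta2)
```

Note that with alpha = 0 and beta1 = beta2 = 1 the MI block reduces exactly to the additive block, so the design strictly generalizes the standard cell.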

- Yuhuai Wu, Yuri Burda, Ruslan Salakhutdinov, Roger B. Grosse
- ArXiv
- 2016

The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks,…

- Saizheng Zhang, Yuhuai Wu, +4 authors Yoshua Bengio
- NIPS
- 2016

In this paper, we systematically analyse the connecting architectures of recurrent neural networks (RNNs). Our main contribution is twofold: first, we present a rigorous graph-theoretic framework describing the connecting architectures of RNNs in general. Second, we propose three architecture complexity measures of RNNs: (a) the recurrent depth, which…

- Yuhuai Wu, Elman Mansimov, Shun Liao, Roger B. Grosse, Jimmy Ba
- ArXiv
- 2017

In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we…

We investigate the parameter-space geometry of recurrent neural networks (RNNs), and develop an adaptation of the path-SGD optimization method, attuned to this geometry, that can learn plain RNNs with ReLU activations. On several datasets that require capturing long-term dependency structure, we show that path-SGD can significantly improve trainability of ReLU…

- Geoffrey Roeder, Yuhuai Wu, David K. Duvenaud
- ArXiv
- 2017

We propose a simple and general variant of the standard reparameterized gradient estimator for the variational evidence lower bound. Specifically, we remove a part of the total derivative with respect to the variational parameters that corresponds to the score function. Removing this term produces an unbiased gradient estimator whose variance approaches…
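The key fact that makes removing the score-function term unbiased is that the score function has zero expectation under the variational distribution itself. A toy Monte Carlo check for a Gaussian q (hypothetical example, not the paper's code):

```python
import numpy as np

# E_{z ~ q_phi}[ d/dphi log q_phi(z) ] = 0, so dropping the score part
# of the total derivative leaves the gradient estimator unbiased.
rng = np.random.default_rng(1)
mu, sigma = 0.7, 1.3
z = mu + sigma * rng.normal(size=200_000)  # reparameterized samples
score_mu = (z - mu) / sigma**2             # d/dmu log N(z; mu, sigma^2)
print(abs(score_mu.mean()))                # close to 0
```

Since the removed term is zero in expectation, only the variance of the estimator changes, which is exactly what the abstract exploits.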

- Yoshua Bengio, Thomas Mesnard, Asja Fischer, Saizheng Zhang, Yuhuai Wu
- Neural Computation
- 2017

We show that Langevin Markov chain Monte Carlo inference in an energy-based model with latent variables has the property that the early steps of inference, starting from a stationary point, correspond to propagating error gradients into internal layers, similar to backpropagation. The backpropagated error is with respect to output units that have received…
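A minimal sketch of the Langevin update being referred to, on a toy quadratic energy (the energy function and step size here are hypothetical, purely for illustration). With a small step size the deterministic part of each update is exactly a gradient-descent step on the energy, which is why early inference steps resemble backpropagated error signals:

```python
import numpy as np

def langevin_step(h, grad_E, eta, rng):
    # One Langevin step on latent variables h:
    # h <- h - eta * dE/dh + sqrt(2 * eta) * Gaussian noise
    return h - eta * grad_E(h) + np.sqrt(2 * eta) * rng.normal(size=h.shape)

# Toy energy E(h) = 0.5 * ||h - target||^2  (stationary dist. ~ N(target, I))
target = np.array([1.0, -2.0])
grad_E = lambda h: h - target
rng = np.random.default_rng(2)
h = np.zeros(2)
for _ in range(500):
    h = langevin_step(h, grad_E, eta=0.05, rng=rng)
```

After many steps the chain samples from a distribution concentrated around the energy minimum; the abstract's point concerns the transient behaviour of the first few steps, not this stationary regime.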

Compared to the REINFORCE gradient estimator, the reparameterization trick usually gives lower-variance estimators. We propose a simple variant of the standard reparameterized gradient estimator for the evidence lower bound that has even lower variance under certain circumstances. Specifically, we decompose the derivative with respect to the variational…
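The variance gap between REINFORCE and the reparameterization trick is easy to see on a toy problem (hypothetical example, chosen only to make the comparison concrete): estimating d/dmu of E_{z~N(mu,1)}[z^2], whose true value is 2*mu.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, n = 1.5, 100_000
eps = rng.normal(size=n)
z = mu + eps                   # reparameterization: z = mu + eps

reinforce = z**2 * (z - mu)    # f(z) * d/dmu log N(z; mu, 1)
reparam = 2 * z                # df/dz * dz/dmu, with dz/dmu = 1

# Both estimators are unbiased (mean ~ 2*mu = 3.0),
# but the reparameterized one has far lower variance.
print(reinforce.mean(), reparam.mean())
print(reinforce.var(), reparam.var())
```

Both averages converge to 3.0, while the REINFORCE samples have roughly an order of magnitude more variance here, which is the starting observation of the abstract.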

Gradient-based optimization is the foundation of deep learning and reinforcement learning. Even when the mechanism being optimized is unknown or not differentiable, optimization using high-variance or biased gradient estimates is still often the best strategy. We introduce a general framework for learning low-variance, unbiased gradient estimators for…
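The simplest instance of an unbiased variance-reduction device of the kind this line of work generalizes is a baseline control variate for the score-function estimator: subtracting a constant b from f leaves the estimator unbiased because E[b * d/dtheta log p(z)] = 0. The framework in the abstract learns much richer surrogates; this sketch shows only the basic mechanism, on a hypothetical toy objective:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, n = 1.5, 100_000
z = mu + rng.normal(size=n)         # z ~ N(mu, 1)
f = z**2                            # objective samples
score = z - mu                      # d/dmu log N(z; mu, 1)

plain = f * score                   # vanilla score-function estimator
baselined = (f - f.mean()) * score  # baseline b ~ E[f]: still unbiased

print(plain.var(), baselined.var())  # baseline cuts the variance
```

A learned, input-dependent surrogate (as in this line of work) can reduce variance far more than a constant baseline while preserving unbiasedness by the same zero-expectation argument.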

- Jianping Liu, Yun Zhu, Yuhuai Wu, Yan Liu, Zhaowei Teng, Yinglu Hao
- Neuropsychiatric disease and treatment
- 2017

Stroke, in which poor blood flow to the brain results in cell death, is the third leading cause of disability and mortality worldwide, and is unevenly distributed across the global population. The cumulative risk of recurrence varies greatly up to 10 years after the first stroke. Carotid atherosclerosis is a major risk factor for stroke. The aim of this…
