Corpus ID: 13985426

Estimating or Propagating Gradients Through Stochastic Neurons

@article{Bengio2013EstimatingOP,
  title={Estimating or Propagating Gradients Through Stochastic Neurons},
  author={Yoshua Bengio},
  journal={ArXiv},
  year={2013},
  volume={abs/1305.2982}
}
Stochastic neurons can be useful for a number of reasons in deep learning models, but in many cases they pose a challenging problem: how to estimate the gradient of a loss function with respect to the input of such stochastic neurons, i.e., can we "back-propagate" through these stochastic neurons? We examine this question, existing approaches, and present two novel families of solutions, applicable in different settings. In particular, it is demonstrated that a simple biologically plausible…
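One of the solutions this paper is best known for is commonly called the straight-through estimator: sample a hard binary activation in the forward pass, but treat the sampling as the identity in the backward pass so the upstream gradient flows through unchanged. A minimal stdlib-only sketch (function names are illustrative, not from the paper):

```python
import random

def binary_stochastic_neuron(p, rng=random):
    """Forward pass: sample a hard binary activation,
    1 with probability p and 0 otherwise."""
    return 1.0 if rng.random() < p else 0.0

def straight_through_grad(upstream_grad):
    """Backward pass (straight-through estimator): pretend the
    stochastic binarization was the identity, so the gradient
    w.r.t. the pre-binarization probability p is simply the
    upstream gradient, passed through unchanged."""
    return upstream_grad
```

The estimator is biased, but since the neuron's expected output equals p, the identity is a reasonable surrogate for the (non-existent) derivative of the sampling step, and it works well in practice for training networks with binary units.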
58 Citations

  • GSNs: Generative Stochastic Networks (29 citations)
  • Noisy Activation Functions (158 citations)
  • Bounding the Test Log-Likelihood of Generative Models (20 citations)
  • Training Neural Networks with Implicit Variance (7 citations)
  • Techniques for Learning Binary Stochastic Feedforward Neural Networks (101 citations, highly influenced)
  • Difference Target Propagation (162 citations)
  • Stochastic Quantization for Learning Accurate Low-Bit Deep Neural Networks (6 citations)
  • Mollifying Networks (29 citations)
