Improving Discrete Latent Representations With Differentiable Approximation Bridges

@article{Ramapuram2020ImprovingDL,
  title={Improving Discrete Latent Representations With Differentiable Approximation Bridges},
  author={Jason Ramapuram and Russ Webb},
  journal={2020 International Joint Conference on Neural Networks (IJCNN)},
  year={2020},
  pages={1-10}
}
Modern neural network training relies on piece-wise (sub-)differentiable functions in order to use backpropagation to update model parameters. In this work, we introduce a novel method to allow non-differentiable functions at intermediary layers of deep neural networks. We do so by training with a differentiable approximation bridge (DAB) neural network which approximates the non-differentiable forward function and provides gradient updates during backpropagation. We present empirical results…
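
The mechanism the abstract describes lends itself to a compact sketch: compute the exact non-differentiable output on the forward pass, route the backward pass through the bridge network's differentiable approximation, and regress the bridge toward the hard output. Below is a minimal illustration in PyTorch; the hard function (a sign non-linearity), the bridge architecture, the SELU activation, and the loss weight beta = 1.0 are illustrative assumptions, not the authors' exact setup.

import torch
import torch.nn as nn

def hard_fn(x):
    # Example non-differentiable forward op (zero gradient almost everywhere).
    return torch.sign(x)

class DABLayer(nn.Module):
    # Applies hard_fn on the forward pass while routing gradients through a
    # small learned approximator (the "bridge") on the backward pass.
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.bridge = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.SELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        hard = hard_fn(x)      # exact, non-differentiable output
        soft = self.bridge(x)  # differentiable approximation of hard_fn
        # Forward value equals `hard`; the backward path runs through `soft`.
        out = hard.detach() + soft - soft.detach()
        # Auxiliary regression loss pulls the bridge toward the hard output.
        aux_loss = ((soft - hard.detach()) ** 2).mean()
        return out, aux_loss

# Hypothetical usage: the auxiliary loss is added to the task loss with an
# assumed weight of 1.0.
layer = DABLayer(dim=10)
x = torch.randn(4, 10, requires_grad=True)
y, aux_loss = layer(x)
task_loss = y.pow(2).mean()
(task_loss + 1.0 * aux_loss).backward()

The detach trick keeps the layer's output numerically exact while still delivering a usable gradient signal upstream; in a fuller implementation one might restrict the task-loss gradient to upstream parameters and train the bridge solely on the regression term, but that separation is omitted here for brevity.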