# Dither is Better than Dropout for Regularising Deep Neural Networks

```bibtex
@article{Simpson2015DitherIB,
  title   = {Dither is Better than Dropout for Regularising Deep Neural Networks},
  author  = {Andrew J. R. Simpson},
  journal = {ArXiv},
  year    = {2015},
  volume  = {abs/1508.04826}
}
```

Regularisation of deep neural networks (DNNs) during training is critical to performance. By far the most popular method is known as dropout. Here, cast through the prism of signal processing theory, we compare and contrast the regularisation effects of dropout with those of dither. We illustrate some serious inherent limitations of dropout and demonstrate that dither provides a more effective regulariser.
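The contrast the abstract draws can be sketched in a few lines: dropout multiplies activations by a random binary mask, while dither adds zero-mean noise to them. The sketch below is illustrative only; the function names, the uniform noise distribution, and the noise scale are assumptions for demonstration, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5):
    # Dropout: zero each unit with probability p, then rescale the
    # survivors by 1/(1-p) so the expected activation is unchanged.
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def dither(x, scale=1.0):
    # Dither: perturb every unit with additive zero-mean noise
    # (uniform here, as an illustrative choice) before the nonlinearity.
    return x + rng.uniform(-scale, scale, size=x.shape)

a = np.ones(8)
print(dropout(a))  # each entry is either 0 or rescaled
print(dither(a))   # every entry perturbed by noise
```

The structural difference is the point: dropout's mask is a multiplicative, all-or-nothing perturbation of each unit, whereas dither perturbs every unit smoothly, which is why the two act as quite different regularisers when viewed through signal processing theory.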

#### 11 Citations

- "Oddball SGD": Novelty Driven Stochastic Gradient Descent for Training Deep Neural Networks (Computer Science, 2015)
- Uniform Learning in a Deep Neural Network via "Oddball" Stochastic Gradient Descent (Computer Science, 2015)
- A Self-Improving Convolution Neural Network for the Classification of Hyperspectral Data (Computer Science, 2016)
- Use it or Lose it: Selective Memory and Forgetting in a Perpetual Learning Machine (Computer Science, Mathematics, 2015)
