Corpus ID: 401157

Dither is Better than Dropout for Regularising Deep Neural Networks

@article{Simpson2015DitherIB,
  title={Dither is Better than Dropout for Regularising Deep Neural Networks},
  author={Andrew J. R. Simpson},
  journal={ArXiv},
  year={2015},
  volume={abs/1508.04826}
}
  • Andrew J. R. Simpson
  • Published 2015
  • Mathematics, Computer Science
  • ArXiv
  • Regularisation of deep neural networks (DNN) during training is critical to performance. By far the most popular method is known as dropout. Here, cast through the prism of signal processing theory, we compare and contrast the regularisation effects of dropout with those of dither. We illustrate some serious inherent limitations of dropout and demonstrate that dither provides a more effective regulariser. 
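
    For orientation, below is a minimal NumPy sketch contrasting the two regularisers named in the abstract: dropout (multiplicative Bernoulli masking of units) and dither (additive low-level noise). It is an illustrative sketch only; the dropout rate and noise amplitude are assumptions, not values taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def dropout(activations, p=0.5):
            # Dropout: zero each unit independently with probability p and
            # rescale the survivors so the expected activation is unchanged.
            mask = rng.random(activations.shape) >= p
            return activations * mask / (1.0 - p)

        def dither(activations, amplitude=0.1):
            # Dither: add zero-mean uniform noise to every unit
            # (the amplitude here is an illustrative choice, not from the paper).
            noise = rng.uniform(-amplitude, amplitude, size=activations.shape)
            return activations + noise

        # Toy usage on a batch of hidden-layer activations.
        h = rng.normal(size=(4, 8))
        print(dropout(h).mean(), dither(h).mean())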

    Citations

    Parallel Dither and Dropout for Regularising Deep Neural Networks
    Taming the ReLU with Parallel Dither in a Deep Neural Network
    A Self-Improving Convolution Neural Network for the Classification of Hyperspectral Data
