Fast dropout training

@inproceedings{Wang2013FastDT,
  title={Fast dropout training},
  author={Sida I. Wang and Christopher D. Manning},
  booktitle={ICML},
  year={2013}
}
Preventing feature co-adaptation by encouraging independent contributions from different features often improves classification and regression performance. Dropout training (Hinton et al., 2012) does this by randomly dropping out (zeroing) hidden units and input features during training of neural networks. However, repeatedly sampling a random subset of input features makes training much slower. Based on an examination of the implied objective function of dropout training, we show how to do…
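
The abstract contrasts naive dropout, which re-samples a Bernoulli mask over inputs on every pass, with a faster route derived from the implied dropout objective. As a rough illustration only, here is a minimal NumPy sketch comparing the Monte Carlo dropped-out pre-activation with a Gaussian approximation of it (mean and variance under the Bernoulli masks). The function names, the keep_prob parameter, and the specific approximation shown are illustrative assumptions, not code from the paper.

import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_preact(w, x, keep_prob, n_samples, rng):
    """Slow path: Monte Carlo dropout. Re-sample a Bernoulli mask for
    each pass and compute the dropped-out pre-activation w . (x * z)."""
    d = x.shape[0]
    samples = np.empty(n_samples)
    for t in range(n_samples):
        z = rng.random(d) < keep_prob  # Bernoulli(keep_prob) mask
        samples[t] = w @ (x * z)
    return samples

def gaussian_dropout_preact(w, x, keep_prob, n_samples, rng):
    """Fast path (assumed for illustration): approximate the dropped-out
    pre-activation with a Gaussian matching its mean and variance under
    the Bernoulli masks, then draw cheap scalar samples from it."""
    wx = w * x
    mu = keep_prob * wx.sum()                               # E[w . (x * z)]
    var = keep_prob * (1.0 - keep_prob) * (wx ** 2).sum()   # Var[w . (x * z)]
    return rng.normal(mu, np.sqrt(var), size=n_samples)

# Quick check that the two distributions roughly agree.
d = 200
w, x = rng.normal(size=d), rng.normal(size=d)
mc = mc_dropout_preact(w, x, keep_prob=0.5, n_samples=5000, rng=rng)
ga = gaussian_dropout_preact(w, x, keep_prob=0.5, n_samples=5000, rng=rng)
print(f"MC:       mean={mc.mean():.3f}  std={mc.std():.3f}")
print(f"Gaussian: mean={ga.mean():.3f}  std={ga.std():.3f}")

The point of the comparison is cost: the Monte Carlo path touches all d features for every sample, while the Gaussian path computes two moments once and then draws scalar samples.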

Citations

Publications citing this paper (166 listed; estimated 37% coverage).

453 Citations

[Citations per Year chart]
Semantic Scholar estimates that this publication has 453 citations based on the available data.
