Corpus ID: 6878699

Comparison of Training Methods for Deep Neural Networks

@article{Glauner2015ComparisonOT,
  title={Comparison of Training Methods for Deep Neural Networks},
  author={Patrick O. Glauner},
  journal={ArXiv},
  year={2015},
  volume={abs/1504.06825}
}
This report describes the difficulties of training neural networks, in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders, and provides an overview of further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part…
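The greedy layer-wise pre-training that the abstract mentions for Stacked Autoencoders can be sketched as follows. This is a minimal illustrative NumPy implementation, not the paper's code: each layer is trained as a tied-weight sigmoid autoencoder by plain gradient descent, and the hidden activations of one layer become the training input for the next. All function names, hyperparameters, and the random data are assumptions for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=300, seed=0):
    """Train one tied-weight sigmoid autoencoder on X by gradient descent.

    Returns the encoder parameters (W, b), decoder bias c, and the
    per-epoch reconstruction losses.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))  # shared encoder/decoder weights
    b = np.zeros(n_hidden)                   # encoder bias
    c = np.zeros(d)                          # decoder bias
    losses = []
    for _ in range(epochs):
        H = sigmoid(X @ W + b)        # encode
        R = sigmoid(H @ W.T + c)      # decode with tied (transposed) weights
        losses.append(0.5 * np.mean((R - X) ** 2))
        # Backpropagate the squared reconstruction error.
        dZr = (R - X) * R * (1 - R) / n          # decoder pre-activation grad
        dZh = (dZr @ W) * H * (1 - H)            # encoder pre-activation grad
        dW = X.T @ dZh + dZr.T @ H               # W appears in both paths
        W -= lr * dW
        b -= lr * dZh.sum(axis=0)
        c -= lr * dZr.sum(axis=0)
    return W, b, c, losses

# Greedy layer-wise stacking: pre-train layer 1 on the data,
# then layer 2 on layer 1's hidden representation.
rng = np.random.default_rng(1)
X = rng.random((100, 8))                     # toy data in [0, 1]
W1, b1, c1, l1 = train_autoencoder(X, n_hidden=5)
H1 = sigmoid(X @ W1 + b1)
W2, b2, c2, l2 = train_autoencoder(H1, n_hidden=3)
```

In a full Stacked Autoencoder pipeline, the pre-trained encoder weights would then initialize a feed-forward network that is fine-tuned with supervised backpropagation; the sketch stops at the unsupervised pre-training stage the abstract discusses.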

    Citations

    Publications citing this paper.

    Comparison study of neural network and deep neural network on repricing GAP prediction in Indonesian conventional public bank



    Deep Feature Learning Architectures for Daily Reservoir Inflow Forecasting


    Leaves image synthesis using generative adversarial networks with regularization improvement


    Fault detection for ironmaking process based on stacked denoising autoencoders


    References

    Publications referenced by this paper.

    Deep Learning: Methods and Applications


    Improving deep neural network acoustic models using generalized maxout networks

    Deep maxout networks for low-resource speech recognition