Corpus ID: 17299272

Neural Network Regularization via Robust Weight Factorization

@article{Rudy2014NeuralNR,
  title={Neural Network Regularization via Robust Weight Factorization},
  author={Jan Rudy and Weiguang Ding and D. Im and Graham W. Taylor},
  journal={ArXiv},
  year={2014},
  volume={abs/1412.6630}
}
  • Jan Rudy, Weiguang Ding, D. Im, Graham W. Taylor
  • Published 2014
  • Mathematics, Computer Science
  • ArXiv
  • Regularization is essential when training large neural networks. As deep neural networks can be mathematically interpreted as universal function approximators, they are effective at memorizing sampling noise in the training data. This results in poor generalization to unseen data. Therefore, it is no surprise that a new regularization technique, Dropout, was partially responsible for the now-ubiquitous winning entry to ImageNet 2012 by the University of Toronto. Currently, Dropout (and related…
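For context on the Dropout technique named in the abstract (and not as a description of this paper's own robust weight factorization method), the following is a minimal sketch of inverted dropout applied to a hidden layer's activations. It assumes NumPy; the drop probability, array shapes, and the `dropout_forward` name are illustrative choices, not taken from the paper.

```python
import numpy as np

def dropout_forward(h, p_drop=0.5, training=True, rng=None):
    """Inverted dropout on a matrix of activations h.

    Each unit is zeroed with probability p_drop during training, and the
    surviving units are scaled by 1 / (1 - p_drop) so the expected value of
    the output matches the no-dropout forward pass used at test time.
    """
    if not training or p_drop == 0.0:
        return h
    rng = np.random.default_rng() if rng is None else rng
    mask = (rng.random(h.shape) >= p_drop).astype(h.dtype)
    return h * mask / (1.0 - p_drop)

# Toy usage on one hidden layer (shapes and values are illustrative).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 10))           # batch of 4 examples, 10 features
W = 0.1 * rng.standard_normal((10, 32))    # hypothetical weight matrix
h = np.maximum(0.0, x @ W)                 # ReLU activations
h_train = dropout_forward(h, p_drop=0.5, training=True, rng=rng)
h_test = dropout_forward(h, training=False)  # identity at test time
```

At test time the mask is skipped entirely; scaling the surviving units by 1 / (1 - p_drop) during training keeps the expected activations consistent between the two modes.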
    5 Citations
    Supervised semi-autoencoder learning for multi-layered neural networks
    • R. Kamimura, H. Takeuchi
    • Computer Science
    • 2017 Joint 17th World Congress of International Fuzzy Systems Association and 9th International Conference on Soft Computing and Intelligent Systems (IFSA-SCIS), 2017

    Unsupervised Learning in Synaptic Sampling Machines
    • Cited 11 times

    Repeated potentiality assimilation: Simplifying learning procedures by positive, independent and indirect operation for improving generalization and interpretation
    • R. Kamimura
    • Computer Science
    • 2016 International Joint Conference on Neural Networks (IJCNN), 2016
    • Cited 10 times

    References

    Showing 1-10 of 38 references

    Dropout Training as Adaptive Regularization
    • Cited 399 times
    • Highly Influential

    Training with Noise is Equivalent to Tikhonov Regularization
    • Cited 787 times

    Regularization of Neural Networks using DropConnect
    • Cited 1,727 times

    Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion
    • Cited 4,521 times

    Extracting and composing robust features with denoising autoencoders
    • Cited 4,156 times
    • Highly Influential

    On the importance of initialization and momentum in deep learning
    • Cited 2,582 times
    • Highly Influential

    Sparse Feature Learning for Deep Belief Networks
    • Cited 737 times

    Maxout Networks
    • Cited 1,576 times
    • Highly Influential