Corpus ID: 51970765

Fast, Better Training Trick - Random Gradient

@article{Wei2018FastBT,
  title={Fast, Better Training Trick - Random Gradient},
  author={Jiakai Wei},
  journal={ArXiv},
  year={2018},
  volume={abs/1808.04293}
}
  • Jiakai Wei
  • Published 2018
  • Mathematics, Computer Science
  • ArXiv
  • In this paper, we present a new method to accelerate training and improve performance, called random gradient (RG). This method can ease the training of any model without extra computational cost; we use image classification, semantic segmentation, and GANs to confirm that it speeds up model training in computer vision. The central idea is to multiply the loss by a random number, which randomly reduces the back-propagated gradient. We can use this…
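
The abstract's central idea, multiplying the loss by a random number before back-propagation so that the gradient is randomly scaled down, can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed details (a toy linear model, SGD, and a uniform scaling factor in [0, 1)); it is not code taken from the paper.

import torch
import torch.nn as nn

# Placeholder model, optimizer, and loss; the paper evaluates CNNs,
# segmentation networks, and GANs, so these choices are illustrative only.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

def train_step(inputs, targets):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    # Random-gradient step: scale the loss by a random factor before
    # backward(); every gradient is then scaled by the same factor, at no
    # extra computational cost. Uniform sampling in [0, 1) is an assumption
    # here, not a detail specified by the paper.
    rg_factor = torch.rand(1).item()
    (loss * rg_factor).backward()
    optimizer.step()
    return loss.item()  # report the unscaled loss for monitoring

# Example usage with random data:
x = torch.randn(8, 10)
y = torch.randint(0, 2, (8,))
print(train_step(x, y))
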
    1 Citation

    Forget the Learning Rate, Decay Loss
