Fast, Better Training Trick - Random Gradient
@article{Wei2018FastBT, title={Fast, Better Training Trick - Random Gradient}, author={Jiakai Wei}, journal={ArXiv}, year={2018}, volume={abs/1808.04293} }
In this paper, we present a method to accelerate training and improve performance, called random gradient (RG). The method can ease the training of any model without extra computation cost; we use image classification, semantic segmentation, and GANs to confirm that it improves the training speed of models in computer vision. The central idea is to multiply the loss by a random number, thereby randomly reducing the back-propagation gradient. We can use this…
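The central idea can be sketched in a few lines. Below is a minimal illustration of the loss-scaling trick described in the abstract, assuming a PyTorch training loop; the model, optimizer, and uniform(0, 1) scaling range are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                  # placeholder model
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def train_step(inputs, targets):
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    # RG sketch: scale the loss by a random factor so the back-propagated
    # gradient is randomly attenuated; adds no extra forward/backward cost.
    rg_loss = loss * torch.rand(1).item()
    rg_loss.backward()
    optimizer.step()
    return loss.item()                                    # log the unscaled loss

x = torch.randn(8, 10)                                    # example usage with random data
y = torch.randint(0, 2, (8,))
print(train_step(x, y))

For plain SGD, scaling the loss by a factor in (0, 1) before backward() is equivalent to randomly attenuating the learning rate for that step, which is why the trick incurs no additional computation beyond the usual forward and backward passes.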