• Computer Science, Mathematics
  • Published 2018

Full deep neural network training on a pruned weight budget

@inproceedings{Golub2018FullDN,
  title={Full deep neural network training on a pruned weight budget},
  author={Maximilian Golub and Guy Lemieux and Mieszko Lis},
  year={2018}
}
We introduce a DNN training technique that learns only a fraction of the full parameter set without incurring an accuracy penalty. To do this, our algorithm restricts the weights updated during backpropagation to those with the highest total gradients. The remaining weights are not tracked, and their initial values are regenerated at every access to avoid storing them in memory. This can dramatically reduce the number of off-chip memory accesses during both training and inference…
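The mechanics described in the abstract, selecting a small budget of weights by accumulated gradient magnitude and regenerating all other weights from their seeded initializer on every access, can be illustrated with the following sketch. This is not the authors' implementation; the class, method, and parameter names (WeightBudgetLayer, budget_fraction, select_tracked) are hypothetical, and the gradient values in the usage example are random stand-ins.

```python
# Minimal sketch (assumed, not the paper's code) of training under a pruned
# weight budget: only the weights with the largest accumulated gradient
# magnitude are stored and updated; every other weight is regenerated from a
# seeded initializer on each access instead of being kept in memory.

import numpy as np


class WeightBudgetLayer:
    def __init__(self, in_dim, out_dim, budget_fraction=0.1, seed=0):
        self.shape = (in_dim, out_dim)
        self.seed = seed                        # seed lets initial values be regenerated on demand
        self.budget = int(budget_fraction * in_dim * out_dim)
        self.grad_accum = np.zeros(self.shape)  # running |gradient| total per weight
        self.tracked_idx = None                 # flat indices of the weights actually stored
        self.tracked_val = None                 # their current (trained) values

    def _regenerate_init(self):
        # Recreate the full initial weight matrix from the seed; nothing is
        # retained between calls, mimicking on-the-fly regeneration.
        rng = np.random.default_rng(self.seed)
        return rng.standard_normal(self.shape) * 0.01

    def weights(self):
        # Untracked weights come from the regenerated initializer; tracked
        # weights are overlaid with their learned values.
        w = self._regenerate_init()
        if self.tracked_idx is not None:
            w.flat[self.tracked_idx] = self.tracked_val
        return w

    def accumulate(self, grad):
        # Warm-up phase: total up gradient magnitudes per weight.
        self.grad_accum += np.abs(grad)

    def select_tracked(self):
        # Keep only the weights with the highest total gradients.
        flat = self.grad_accum.ravel()
        self.tracked_idx = np.argpartition(flat, -self.budget)[-self.budget:]
        self.tracked_val = self._regenerate_init().flat[self.tracked_idx].copy()

    def apply_grad(self, grad, lr=0.1):
        # Backpropagation updates only the tracked subset; all other
        # gradient entries are simply discarded.
        if self.tracked_idx is None:
            raise RuntimeError("call select_tracked() before training updates")
        self.tracked_val -= lr * grad.flat[self.tracked_idx]


if __name__ == "__main__":
    layer = WeightBudgetLayer(4, 3, budget_fraction=0.25, seed=42)
    rng = np.random.default_rng(1)
    # Accumulate gradient magnitudes during warm-up (random stand-ins here).
    for _ in range(5):
        layer.accumulate(rng.standard_normal(layer.shape))
    layer.select_tracked()
    # One training step on the pruned weight budget.
    layer.apply_grad(rng.standard_normal(layer.shape))
    print("weights stored:", layer.tracked_val.size, "of", layer.grad_accum.size)
```

In this sketch only `tracked_val` and `tracked_idx` ever need to persist in memory; the full matrix exists only transiently inside `weights()`, which is what would cut off-chip memory traffic during training and inference.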
