Corpus ID: 54447619

Model Compression with Generative Adversarial Networks

@article{Liu2018ModelCW,
  title={Model Compression with Generative Adversarial Networks},
  author={Ruishan Liu and Nicol{\`o} Fusi and Lester Mackey},
  journal={ArXiv},
  year={2018},
  volume={abs/1812.02271}
}
  • Ruishan Liu, Nicolò Fusi, Lester Mackey
  • Published 2018
  • Computer Science, Mathematics
  • ArXiv
  • More accurate machine learning models often demand more computation and memory at test time, making them difficult to deploy on CPU- or memory-constrained devices. Model compression (also known as distillation) alleviates this burden by training a less expensive student model to mimic the expensive teacher model while maintaining most of the original accuracy. However, when fresh data is unavailable for the compression task, the teacher's training data is typically reused, leading to suboptimal…
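
Below is a minimal sketch of the student-mimics-teacher (distillation) objective the abstract describes, assuming a PyTorch setup. It is not the paper's method, which concerns training the student on GAN-generated inputs rather than the teacher's original data; the names teacher, student, inputs, and optimizer are hypothetical placeholders.

    import torch
    import torch.nn.functional as F

    def distill_step(student, teacher, inputs, optimizer, temperature=2.0):
        # One distillation update: the student is trained to match the
        # teacher's softened output distribution via KL divergence.
        teacher.eval()
        with torch.no_grad():
            teacher_logits = teacher(inputs)  # expensive teacher predictions
        student_logits = student(inputs)      # cheap student predictions
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2                  # standard temperature scaling
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()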

    Citations

    Publications citing this paper (showing 1-4 of 4):

      • Self-Supervised GAN Compression (cites background)
      • Classification Accuracy Score for Conditional Generative Models (cites methods)
      • Learning GANs and Ensembles Using Discrepancy (cites methods)
      • Knowledge Distillation: A Survey (cites methods and background; highly influenced)
