Corpus ID: 227275587

TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning

@inproceedings{Cai2020TinyTLRM,
  title={TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning},
  author={H. Cai and Chuang Gan and Ligeng Zhu and Song Han},
  booktitle={NeurIPS},
  year={2020}
}
  • H. Cai, Chuang Gan, Ligeng Zhu, Song Han
  • Published in NeurIPS 2020
  • Computer Science
  • On-device learning enables edge devices to continually adapt AI models to new data, which requires a small memory footprint to fit the tight memory constraints of edge devices. Existing work addresses this by reducing the number of trainable parameters. However, this does not directly translate into memory savings, since the major bottleneck is the activations, not the parameters. In this work, we present Tiny-Transfer-Learning (TinyTL) for memory-efficient on-device learning. TinyTL freezes the…
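
The abstract's core argument, that activation storage rather than parameter count dominates training memory and that updating only bias terms sidesteps storing layer inputs, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes PyTorch, uses a toy two-layer feature extractor with made-up sizes, and freezes everything except biases to show which gradients remain.

# Minimal sketch (not the TinyTL code) of the activations-vs-parameters argument.
# Assumes PyTorch; layer shapes and batch size are illustrative only.
import torch
import torch.nn as nn

def freeze_weights_keep_bias(model: nn.Module) -> None:
    """Freeze every parameter except bias terms (a bias-only fine-tuning sketch)."""
    for name, p in model.named_parameters():
        p.requires_grad = name.endswith("bias")

# Toy feature extractor: the intermediate activation between layers is what
# backprop must store to compute the weight gradient dL/dW = dL/dy * x^T.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=True),
    nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=True),
)

x = torch.randn(8, 3, 224, 224)  # one mini-batch of inputs

# Compare the size of one stored activation map with the total parameter count:
act_elems = 8 * 64 * 224 * 224   # input to the second conv, kept for dL/dW
param_elems = sum(p.numel() for p in model.parameters())
print(f"activation elements: {act_elems:,}  parameter elements: {param_elems:,}")

# Bias-only fine-tuning: in principle the bias gradient dL/db needs only dL/dy,
# not the stored input x, which is the saving the abstract alludes to (whether a
# given framework actually frees those buffers depends on its autograd details).
freeze_weights_keep_bias(model)
print("trainable tensors:", [n for n, p in model.named_parameters() if p.requires_grad])

loss = model(x).mean()
loss.backward()                   # gradients are produced only for the biases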
