Corpus ID: 219981212

Data Efficient Stagewise Knowledge Distillation

@inproceedings{Kulkarni2019DataES,
  title={Data Efficient Stagewise Knowledge Distillation},
  author={Akshay Ravindra Kulkarni and Navid Panchi and Sharath Chandra Raparthy and Shital S. Chiddarwar},
  year={2019}
}
  • Akshay Ravindra Kulkarni, Navid Panchi, Sharath Chandra Raparthy, Shital S. Chiddarwar
  • Published 2019
  • Computer Science
  • Despite the success of Deep Learning (DL), the deployment of modern DL models requiring large computational power poses a significant problem for resource-constrained systems. This necessitates building compact networks that reduce computations while preserving performance. Traditional Knowledge Distillation (KD) methods that transfer knowledge from teacher to student (a) use a single stage and (b) require the whole data set while distilling the knowledge to the student. In this work, we…
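For context, the conventional single-stage KD baseline that the abstract contrasts against typically minimizes a weighted sum of a soft-target loss (against the teacher's temperature-softened outputs) and the usual hard-label cross-entropy, computed over the full training set. Below is a minimal PyTorch sketch of that baseline; the function names, temperature `T`, and weight `alpha` are illustrative assumptions and are not taken from this paper's stagewise method.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft-target term: KL divergence between the temperature-softened
    # teacher and student distributions (scaled by T^2, as in standard KD).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: ordinary cross-entropy against the ground truth.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def train_step(student, teacher, batch, optimizer):
    # Single-stage KD: the frozen teacher and the student both see every
    # batch of the full data set, and only the student is updated.
    inputs, labels = batch
    with torch.no_grad():
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)
    loss = kd_loss(student_logits, teacher_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```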
