Compressing text classification models

Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, Tomas Mikolov
We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings. While the original technique leads to a loss in accuracy, we adapt this method to circumvent quantization artefacts. Our experiments carried out on several benchmarks show that our approach typically…
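The abstract's core technique, product quantization, compresses an embedding matrix by splitting each vector into subvectors and storing, for each subvector, only the index of its nearest centroid in a small learned codebook. The sketch below is my own minimal pure-Python illustration of that idea under simple assumptions (a naive k-means trainer, dimension divisible by the number of subvectors); it is not the paper's fastText implementation, which adds refinements such as norm-aware quantization.

```python
import random

def kmeans(points, k, iters=10, seed=0):
    """Naive k-means over a list of equal-length vectors (illustration only)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from random points
    for _ in range(iters):
        # assign each point to its nearest centroid (squared L2 distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # recompute centroids; keep the old one if a cluster is empty
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centroids

def pq_train(vectors, n_subvectors, k):
    """Learn one codebook of k centroids per subvector block.

    Assumes the embedding dimension is divisible by n_subvectors.
    """
    d = len(vectors[0])
    step = d // n_subvectors
    codebooks = []
    for s in range(n_subvectors):
        block = [v[s * step:(s + 1) * step] for v in vectors]
        codebooks.append(kmeans(block, k))
    return codebooks

def pq_encode(v, codebooks):
    """Replace each subvector by the index of its nearest centroid."""
    step = len(v) // len(codebooks)
    codes = []
    for s, cb in enumerate(codebooks):
        sub = v[s * step:(s + 1) * step]
        codes.append(min(range(len(cb)),
                         key=lambda c: sum((a - b) ** 2 for a, b in zip(sub, cb[c]))))
    return codes

def pq_decode(codes, codebooks):
    """Reconstruct an approximate vector by concatenating the chosen centroids."""
    out = []
    for s, c in enumerate(codes):
        out.extend(codebooks[s][c])
    return out
```

With, say, 300-dimensional embeddings split into subvectors quantized over 256 centroids each, every subvector is stored as a single byte instead of floats, which is where the memory savings the abstract describes come from; the decoded vectors are only approximations, and the paper's contribution is adapting this scheme so the approximation error does not hurt classification accuracy.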
Highly cited: this paper has 79 citations and has been referenced on Twitter 28 times.






