Using Maximum Entropy for Text Classification

@inproceedings{Nigam1999UsingME,
  title={Using Maximum Entropy for Text Classification},
  author={Kamal Nigam and John D. Lafferty and Andrew McCallum},
  booktitle={IJCAI-99 Workshop on Machine Learning for Information Filtering},
  year={1999}
}
This paper proposes the use of maximum entropy techniques for text classification. Maximum entropy is a probability distribution estimation technique widely used for a variety of natural language tasks, such as language modeling, part-of-speech tagging, and text segmentation. The underlying principle of maximum entropy is that without external knowledge, one should prefer distributions that are uniform. Constraints on the distribution, derived from labeled training data, inform the technique…
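
In symbols, the model the abstract describes is the standard conditional maximum-entropy (exponential) form; the feature functions f_i, weights \lambda_i, training set D, and label function c(d) below are the usual maxent notation, not text quoted from this page:

P_\Lambda(c \mid d) = \frac{\exp\left(\sum_i \lambda_i f_i(d, c)\right)}{\sum_{c'} \exp\left(\sum_i \lambda_i f_i(d, c')\right)}

The "constraints derived from labeled training data" require that, for each feature f_i, its expected value under the model match its empirical average over the labeled training documents D, where c(d) is the labeled class of document d:

\frac{1}{|D|} \sum_{d \in D} f_i(d, c(d)) = \frac{1}{|D|} \sum_{d \in D} \sum_{c} P_\Lambda(c \mid d)\, f_i(d, c)

Among all distributions satisfying these constraints, the one of maximum entropy has exactly this exponential form, and its weights can be fit iteratively, for example by improved iterative scaling.
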
Highly Influential
This paper has highly influenced 70 other papers.
Highly Cited
This paper has 901 citations.

Citations

Publications citing this paper: 575 extracted citations.

901 Citations

Citations per Year
Semantic Scholar estimates that this publication has 901 citations based on the available data.


References

Publications referenced by this paper.
Showing 1-5 of 37 references

An evaluation of statistical approaches to text categorization

  • Yiming Yang
  • Journal of Information Retrieval,
  • 1999
3 Excerpts

Context-sensitive learning methods for text categorization

  • William W. Cohen, Yoram Singer
  • SIGIR '96: Proceedings of the Nineteenth Annual…
  • 1996

Learning to extract symbolic knowledge from the World Wide Web

  • Mark Craven, Dan DiPasquo, Dayne Freitag, Andrew McCallum, Tom Mitchell, Kamal Nigam, Sean Slattery
  • AAAI-98,
  • 1998

A maximum entropy approach to natural language processing

  • Adam L. Berger, Stephen A. Della Pietra, Vincent J. Della Pietra
  • Computational Linguistics,
  • 1996

A Gaussian prior for smoothing maximum entropy models

  • Stanley F. Chen, Ronald Rosenfeld
  • Technical Report CMU-CS-99-108, Carnegie Mellon University,
  • 1999
1 Excerpt
