Maximum Entropy Markov Models for Information Extraction and Segmentation

Abstract

Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation, and information extraction. In these cases, the observations are usually modeled as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word, capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We present positive experimental results on the segmentation of FAQs.
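
The model described in the abstract can be made concrete with one formula. In our notation (a sketch, not quoted from the paper): each previous state s' indexes its own exponential model over next states, with binary features f_a(o, s) of the current observation and candidate next state, learned weights lambda_a, and a per-context normalizer Z(o, s'), giving the transition distribution

    P_{s'}(s \mid o) = \frac{1}{Z(o, s')} \exp\Big( \sum_a \lambda_a \, f_a(o, s) \Big)

As a minimal illustration of how such a per-state distribution could be evaluated (the names transition_probs, weights, features, and states are ours, not the authors' code), a Python sketch might look like:

    import math

    def transition_probs(weights, features, states, prev_state, obs):
        # One exponential model per previous state s': score each candidate
        # next state s by the weighted sum of its active binary features.
        scores = {s: sum(weights.get((prev_state, f), 0.0)
                         for f in features(obs, s))
                  for s in states}
        # Normalize by Z(o, s') so the scores form a probability distribution.
        z = sum(math.exp(v) for v in scores.values())
        return {s: math.exp(v) / z for s, v in scores.items()}

In the paper's FAQ-segmentation setting, states would be segment labels (e.g., question, answer) and features(obs, s) would fire on overlapping line-level cues such as capitalization or formatting. Fitting the weights themselves is done by maximum-entropy training (e.g., iterative scaling), which this sketch does not include.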

Cite this paper

@inproceedings{McCallum2000MaximumEM,
  title     = {Maximum Entropy Markov Models for Information Extraction and Segmentation},
  author    = {Andrew McCallum and Dayne Freitag and Fernando Pereira},
  booktitle = {ICML},
  year      = {2000}
}