Inducing Features of Random Fields

  • Stephen Della Pietra, Vincent Della Pietra, John Lafferty
  • Published 1995

Abstract

We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches including decision trees and Boltzmann machines are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing.
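To make the weight-estimation step concrete, here is a minimal sketch, not the authors' implementation: it fits the weights of a log-linear (exponential-family) model by Generalized Iterative Scaling, a simpler relative of the improved iterative scaling algorithm the paper describes. The sample space, features, and training counts below are all made up for illustration, and the greedy feature-induction step is omitted.

```python
import numpy as np

# Toy sample space: all binary strings of length 4 (illustrative only).
X = np.array([[int(b) for b in format(i, "04b")] for i in range(16)])

# Hypothetical binary features f_j(x) = x_j, plus a slack feature so that
# sum_j f_j(x) = M is the same for every x, as GIS requires.
def features(x):
    f = list(x)
    f.append(len(x) - sum(x))           # slack feature
    return np.array(f, dtype=float)

F = np.array([features(x) for x in X])  # feature matrix, one row per x
M = F[0].sum()                          # constant total feature count

# Made-up empirical distribution standing in for training data.
counts = np.random.default_rng(0).integers(1, 10, size=len(X))
p_emp = counts / counts.sum()
emp_expect = p_emp @ F                  # empirical expectations E~[f_j]

# Generalized Iterative Scaling: each update moves the model expectations
# E_p[f_j] toward the empirical ones, which is equivalent to minimizing
# the KL divergence between the empirical distribution and the model.
lam = np.zeros(F.shape[1])
for _ in range(500):
    logits = F @ lam
    p = np.exp(logits - logits.max())
    p /= p.sum()                        # model distribution p_lambda
    model_expect = p @ F                # model expectations E_p[f_j]
    lam += np.log(emp_expect / model_expect) / M

# At convergence the constraints E_p[f_j] = E~[f_j] hold.
print(np.allclose(p @ F, emp_expect, atol=1e-4))
```

The greedy induction step the abstract describes would wrap this loop: for each candidate feature, estimate the gain in log-likelihood (equivalently, the drop in KL divergence) from adding it alone, add the best candidate to the field, and refit all weights by iterative scaling.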

[Chart: Citations per Year, 1996–2016]

182 Citations

Semantic Scholar estimates that this publication has 182 citations based on the available data.


Cite this paper

@inproceedings{DellaPietra1995InducingFO, title={Inducing Features of Random Fields}, author={Stephen Della Pietra and Vincent Della Pietra and John Lafferty}, year={1995} }