Publications
A Bayesian framework for word segmentation: Exploring the effects of context
The results indicate that taking context into account is important for a statistical word segmentation strategy to be successful, and raise the possibility that even young infants may be able to exploit more subtle statistical patterns than have usually been considered.
A fully Bayesian approach to unsupervised part-of-speech tagging
This model has the structure of a standard trigram HMM, yet its accuracy is closer to that of a state-of-the-art discriminative model (Smith and Eisner, 2005), up to 14 percentage points better than MLE.
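As a rough sketch of the kind of inference involved, the update below resamples a single tag in a collapsed Gibbs sampler. It is simplified to a bigram HMM for brevity (the paper's model is a trigram HMM), and the count-table layout and parameter names here are assumptions of this sketch, not taken from the paper.

import random

def resample_tag(i, tags, words, K, V, n_tt, n_t, n_tw, n_w, alpha, beta):
    """One collapsed-Gibbs update for the tag at position i (0 < i < len - 1).

    n_tt[s][t]: count of tag bigram (s, t); n_t[s]: count of s as left context;
    n_tw[t]: dict of word-emission counts for tag t; n_w[t]: total emissions of t.
    """
    t_prev, t_old, t_next, w = tags[i - 1], tags[i], tags[i + 1], words[i]
    # Remove position i's counts; the Dirichlet priors are integrated out.
    n_tt[t_prev][t_old] -= 1; n_t[t_prev] -= 1
    n_tt[t_old][t_next] -= 1; n_t[t_old] -= 1
    n_tw[t_old][w] -= 1;      n_w[t_old] -= 1
    # Score each candidate tag t (a small correction for the case
    # t_prev == t == t_next is ignored here for brevity).
    weights = [(n_tt[t_prev][t] + alpha) / (n_t[t_prev] + K * alpha)
               * (n_tt[t][t_next] + alpha) / (n_t[t] + K * alpha)
               * (n_tw[t].get(w, 0) + beta) / (n_w[t] + V * beta)
               for t in range(K)]
    t_new = random.choices(range(K), weights=weights)[0]
    # Add the sampled tag's counts back in.
    n_tt[t_prev][t_new] += 1; n_t[t_prev] += 1
    n_tt[t_new][t_next] += 1; n_t[t_new] += 1
    n_tw[t_new][w] = n_tw[t_new].get(w, 0) + 1; n_w[t_new] += 1
    tags[i] = t_new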
Adaptor Grammars: A Framework for Specifying Compositional Nonparametric Bayesian Models
This paper presents a general-purpose inference algorithm for adaptor grammars, making it easy to define and use such models, and illustrates how several existing nonparametric Bayesian models can be expressed within this framework.
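A toy sketch of the adaptor idea, under assumed interfaces (this is not the paper's general-purpose inference algorithm): a Pitman-Yor adaptor wraps a base generator for a nonterminal and reuses previously generated subtrees with probability proportional to how often they have been used before.

import random

class PitmanYorAdaptor:
    def __init__(self, base_generate, a=0.5, b=1.0):
        self.base_generate = base_generate  # draws a fresh subtree from the grammar
        self.a, self.b = a, b               # PY discount and concentration
        self.cached, self.counts = [], []   # cached subtrees and their use counts

    def generate(self):
        n, K = sum(self.counts), len(self.cached)
        p_new = 1.0 if n == 0 else (self.b + self.a * K) / (self.b + n)
        if random.random() < p_new:
            tree = self.base_generate()     # grow a new subtree, then cache it
            self.cached.append(tree)
            self.counts.append(1)
            return tree
        # Otherwise reuse cached subtree k with prob. (counts[k] - a) / (b + n).
        k = random.choices(range(K), weights=[c - self.a for c in self.counts])[0]
        self.counts[k] += 1
        return self.cached[k]

adaptor = PitmanYorAdaptor(lambda: random.choice(["the dog", "a cat", "the cat"]))
print([adaptor.generate() for _ in range(10)])  # frequently used subtrees get reused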
Inducing Probabilistic CCG Grammars from Logical Form with Higher-Order Unification
This paper uses higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develops an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model.
Learning OT constraint rankings using a maximum entropy model
A weakness of standard Optimality Theory is its inability to account for grammars with free variation. We describe here the Maximum Entropy model, a general statistical model, and show how it can be used to learn constraint rankings that account for such variation.
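A minimal sketch of how a Maximum Entropy grammar yields free variation, with hypothetical constraint names, weights, and violation counts: each candidate's probability is proportional to the exponential of its negated weighted violation sum.

import math

def maxent_probs(candidates, weights):
    """candidates: {output: {constraint: violation_count}}"""
    harmony = {out: -sum(weights[c] * v for c, v in viols.items())
               for out, viols in candidates.items()}
    z = sum(math.exp(h) for h in harmony.values())
    return {out: math.exp(h) / z for out, h in harmony.items()}

# Hypothetical tableau with two competing outputs for one input:
cands = {"out-A": {"NoCoda": 1, "Max": 0},
         "out-B": {"NoCoda": 0, "Max": 1}}
print(maxent_probs(cands, {"NoCoda": 2.0, "Max": 1.5}))
# out-B is preferred (~0.62) but out-A keeps real probability mass (~0.38):
# free variation, which a strict constraint ranking cannot express.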
Bayesian Inference for PCFGs via Markov Chain Monte Carlo
This paper presents two Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference of probabilistic context-free grammars (PCFGs) from terminal strings, providing an alternative to maximum-likelihood estimation via the Inside-Outside algorithm.
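One half of such a Gibbs sampler is easy to sketch, since with Dirichlet priors the conditional posterior over each nonterminal's rule probabilities, given rule counts from the current parse trees, is again Dirichlet. The other half, resampling the parses given the rule probabilities, is omitted here, and the counts below are made up.

import numpy as np

def resample_rule_probs(rule_counts, alpha=1.0, seed=0):
    """rule_counts: {nonterminal: {rhs: count of the rule in the current trees}}"""
    rng = np.random.default_rng(seed)
    theta = {}
    for nt, counts in rule_counts.items():
        rhss = list(counts)
        # Posterior for this nonterminal: Dirichlet(alpha + observed counts).
        probs = rng.dirichlet([alpha + counts[r] for r in rhss])
        theta[nt] = dict(zip(rhss, probs))
    return theta

print(resample_rule_probs({"NP": {"Det N": 12, "NP PP": 3, "N": 7}}))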
Interpolating between types and tokens by estimating power-law generators
It is shown that taking a particular stochastic process, the Pitman-Yor process, as an adaptor justifies the appearance of type frequencies in formal analyses of natural language and improves the performance of a model for unsupervised learning of morphology.
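A small simulation, assuming nothing beyond the two-parameter Chinese restaurant process representation of the Pitman-Yor process, shows the relevant power-law behaviour: the number of distinct types grows sublinearly, roughly as n**a, in the number of tokens.

import random

def pitman_yor_types(n_tokens, a=0.8, b=1.0, seed=0):
    rng = random.Random(seed)
    counts = []  # token count per type, in order of first appearance
    for i in range(n_tokens):
        p_new = 1.0 if i == 0 else (b + a * len(counts)) / (b + i)
        if rng.random() < p_new:
            counts.append(1)  # generate a new type
        else:                 # reuse an existing type, rich-get-richer style
            k = rng.choices(range(len(counts)),
                            weights=[c - a for c in counts])[0]
            counts[k] += 1
    return len(counts)

for n in (1_000, 10_000, 100_000):
    print(n, pitman_yor_types(n))  # type count grows roughly like n**0.8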
Contextual Dependencies in Unsupervised Word Segmentation
Two new Bayesian word segmentation methods are proposed, assuming unigram and bigram models of word dependencies respectively; the bigram model greatly outperforms the unigram model (and previous probabilistic models), demonstrating the importance of such dependencies for word segmentation.
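A toy illustration of why such dependencies matter (an MLE sketch over a made-up corpus, not the paper's hierarchical Bayesian model): the bigram score credits the predictability of words like "to" after "want", which the unigram score ignores.

import math
from collections import Counter

corpus = [["you", "want", "to", "see"], ["you", "want", "to", "look"],
          ["want", "to", "see"], ["you", "see"]]
uni, bi, ctx = Counter(), Counter(), Counter()
for sent in corpus:
    prev = "<s>"
    for w in sent:
        uni[w] += 1
        bi[(prev, w)] += 1
        ctx[prev] += 1
        prev = w

def unigram_logprob(words):
    total = sum(uni.values())
    return sum(math.log(uni[w] / total) for w in words)  # ignores adjacency

def bigram_logprob(words):
    lp, prev = 0.0, "<s>"
    for w in words:
        lp += math.log(bi[(prev, w)] / ctx[prev])  # conditions on the left word
        prev = w
    return lp

seg = ["you", "want", "to", "look"]
print(unigram_logprob(seg), bigram_logprob(seg))
# The bigram score is far higher because "want" strongly predicts "to"; a
# unigram learner can only explain that regularity by merging words together.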
Lexical Generalization in CCG Grammar Induction for Semantic Parsing
An algorithm for learning factored CCG lexicons is presented, along with a probabilistic parse-selection model; the factored lexicons include both lexemes, which model word meaning, and templates, which model systematic variation in word usage.
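A minimal sketch of the factoring itself, with hypothetical class names and placeholder conventions: a full lexical item is recovered by pairing a lexeme (a word plus its logical constants) with a template (a CCG category plus a logical form containing placeholders).

from dataclasses import dataclass

@dataclass(frozen=True)
class Lexeme:
    words: str         # the word (or multiword string) itself
    constants: tuple   # logical constants carrying its meaning

@dataclass(frozen=True)
class Template:
    category: str      # CCG syntactic category
    form: str          # logical form with numbered placeholders

def combine(lexeme, template):
    form = template.form
    for i, c in enumerate(lexeme.constants, start=1):
        form = form.replace(f"#{i}", c)  # fill placeholder #i with constant i
    return (lexeme.words, template.category, form)

flights = Lexeme("flights", ("flight",))
noun = Template("N", "lambda x.#1(x)")
print(combine(flights, noun))  # ('flights', 'N', 'lambda x.flight(x)')
# One template now generalizes across every noun lexeme, which is the point
# of the factoring.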
Two Decades of Unsupervised POS Induction: How Far Have We Come?
It is shown that some of the oldest (and simplest) systems stand up surprisingly well against more recent approaches, and the idea is introduced of evaluating systems based on their ability to produce cluster prototypes that are useful as input to a prototype-driven learner.
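For concreteness, here is a sketch of many-to-one accuracy, one standard metric in such POS-induction comparisons (the mapping rule is the usual one; the toy data are made up): each induced cluster is mapped to the gold tag it co-occurs with most often, and tagging accuracy is computed under that mapping.

from collections import Counter, defaultdict

def many_to_one(induced, gold):
    cooc = defaultdict(Counter)
    for c, t in zip(induced, gold):
        cooc[c][t] += 1
    # Map each induced cluster to its most frequent gold tag.
    mapping = {c: cnt.most_common(1)[0][0] for c, cnt in cooc.items()}
    return sum(mapping[c] == t for c, t in zip(induced, gold)) / len(gold)

print(many_to_one([0, 0, 1, 1, 1], ["DT", "DT", "NN", "NN", "VB"]))  # 0.8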