Painless Unsupervised Learning with Features
TLDR: This work shows how features can easily be added to standard generative models for unsupervised learning, without requiring complex new training methods, and applies this technique to part-of-speech induction, grammar induction, word alignment, and word segmentation.
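The core technique here replaces each multinomial in the generative model with a locally normalized log-linear distribution over features. A minimal sketch of that parameterization and its M-step gradient, assuming NumPy arrays; the names and feature design are illustrative, not the authors' code:

```python
import numpy as np

def featurized_multinomial(weights, features):
    """Softmax over feature scores: features is (n_outcomes, n_features),
    weights is (n_features,). Returns a normalized multinomial."""
    scores = features @ weights
    scores -= scores.max()                 # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

def m_step_gradient(weights, features, expected_counts):
    """Gradient of the expected log-likelihood for one multinomial, given
    expected outcome counts e from the E-step:
    sum_k e[k] f(k) - (sum_k e[k]) * E_p[f]."""
    probs = featurized_multinomial(weights, features)
    return features.T @ expected_counts - expected_counts.sum() * (features.T @ probs)
```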
Supervised Learning of Complete Morphological Paradigms
We describe a supervised approach to predicting the set of all inflected forms of a lexical item. Our system automatically acquires the orthographic transformation rules of morphological paradigms …
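As a toy illustration of what such an orthographic rule can look like, here is a longest-common-prefix heuristic over a (lemma, inflected form) pair; the heuristic is an assumption for brevity, since the actual system aligns forms more carefully:

```python
def suffix_rule(lemma, inflected):
    """Extract a (strip, append) suffix-substitution rule."""
    i = 0
    while i < min(len(lemma), len(inflected)) and lemma[i] == inflected[i]:
        i += 1
    return lemma[i:], inflected[i:]

def apply_rule(word, rule):
    strip, append = rule
    assert word.endswith(strip)
    return word[:len(word) - len(strip)] + append

rule = suffix_rule("hablar", "hablo")      # Spanish: ("ar", "o")
print(apply_rule("cantar", rule))          # -> "canto"
```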
Why Generative Phrase Models Underperform Surface Heuristics
TLDR: A simple generative phrase-based model is proposed, its estimates are verified to be inferior to those given by surface statistics, and it is shown that interpolating the two methods can yield a modest increase in BLEU score.
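The interpolation itself is a one-line mixture; a sketch assuming two phrase tables stored as dictionaries keyed by (source phrase, target phrase), with a tunable mixing weight lam:

```python
def interpolated_prob(f_phrase, e_phrase, p_gen, p_heur, lam=0.5):
    """lam * generative estimate + (1 - lam) * surface-heuristic estimate."""
    key = (f_phrase, e_phrase)
    return lam * p_gen.get(key, 0.0) + (1.0 - lam) * p_heur.get(key, 0.0)
```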
Adding Interpretable Attention to Neural Translation Models Improves Word Alignment
TLDR: This work proposes a simple extension to the Transformer architecture that makes use of its hidden representations and is restricted to attend solely to encoder information when predicting the next word, and introduces a novel alignment inference procedure that applies stochastic gradient descent to directly optimize the attention activations toward a given target word.
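Reading alignments off attention is the baseline step of such methods; a minimal sketch linking each target position to its highest-attention source position (the paper's inference additionally optimizes the activations with SGD, which is omitted here):

```python
import numpy as np

def alignments_from_attention(attn):
    """attn: (target_len, source_len) attention matrix, rows summing to 1.
    Returns a set of (target_index, source_index) alignment links."""
    return {(i, int(np.argmax(row))) for i, row in enumerate(attn)}
```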
Sampling Alignment Structure under a Bayesian Translation Model
We describe the first tractable Gibbs sampling procedure for estimating phrase pair frequencies under a probabilistic model of phrase alignment. We propose and evaluate two nonparametric priors that …
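The generic Gibbs pattern underlying such a sampler, sketched at the word-link level under an assumed conditional log-score function score(i, j, alignment); the paper's actual sampler operates on phrase alignments with richer local operators:

```python
import math, random

def gibbs_sweep(alignment, n_src, score):
    """alignment[i] = source index linked to target position i.
    One sweep resamples each link from its conditional distribution
    given all the other links."""
    for i in range(len(alignment)):
        weights = [math.exp(score(i, j, alignment)) for j in range(n_src)]
        r = random.random() * sum(weights)
        for j, w in enumerate(weights):
            r -= w
            if r <= 0:
                alignment[i] = j
                break
    return alignment
```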
Tailoring Word Alignments to Syntactic Machine Translation
TLDR: This work proposes a novel model for unsupervised word alignment which explicitly takes into account target language constituent structure, while retaining the robustness and efficiency of the HMM alignment model.
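For reference, Viterbi decoding under the baseline HMM alignment model that the syntax-aware distortion modifies; the matrix shapes and names here are assumptions, with emission[j, i] = p(f_i | e_j) and transition[j_prev, j] the jump distribution:

```python
import numpy as np

def hmm_viterbi_align(emission, transition, initial):
    """emission: (J, I), transition: (J, J), initial: (J,), all positive.
    Returns the most probable source position for each target word."""
    J, I = emission.shape
    logd = np.log(initial) + np.log(emission[:, 0])
    back = np.zeros((I, J), dtype=int)
    for i in range(1, I):
        scores = logd[:, None] + np.log(transition)   # (prev j, next j)
        back[i] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(emission[:, i])
    path = [int(logd.argmax())]
    for i in range(I - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]
```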
Better Word Alignments with Supervised ITG Models
TLDR: This work investigates supervised word alignment methods that exploit inversion transduction grammar (ITG) constraints, including the presentation of a new normal form grammar for canonicalizing derivations, and introduces many-to-one block alignment features, which significantly improve ITG models.
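The ITG constraint itself has a clean combinatorial reading: an alignment permutation is reachable by a binary ITG exactly when it avoids the "inside-out" patterns 3142 and 2413. A brute-force check, fine for short sentences and purely illustrative:

```python
from itertools import combinations

def is_itg_permutation(perm):
    """True iff perm avoids the 3142 and 2413 patterns, i.e. it can be
    generated by a binary-branching ITG."""
    for i, j, k, l in combinations(range(len(perm)), 4):
        a, b, c, d = perm[i], perm[j], perm[k], perm[l]
        if c > a > d > b:        # relative order 3 1 4 2
            return False
        if b > d > a > c:        # relative order 2 4 1 3
            return False
    return True

print(is_itg_permutation([2, 0, 3, 1]))   # False: the 3142 pattern
```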
End-to-End Neural Word Alignment Outperforms GIZA++
TLDR: This work presents the first end-to-end neural word alignment method that consistently outperforms GIZA++ on three data sets and repurposes a Transformer model trained for supervised translation to also serve as an unsupervised word alignment model in a manner that is tightly integrated and does not affect translation quality.
The Complexity of Phrase Alignment Problems
TLDR: It is shown that the problem of finding an optimal alignment can be cast as an integer linear program, which provides a simple, declarative approach to Viterbi inference for phrase alignment models that is empirically quite efficient.
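A sketch of that declarative formulation using the PuLP modeling library: choose a maximum-weight subset of candidate phrase pairs such that every source and target word is covered exactly once. The candidate spans and weights are assumed inputs, and the candidate set is assumed to admit a full tiling:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def align_ilp(candidates, n_src, n_tgt):
    """candidates: list of ((src_start, src_end), (tgt_start, tgt_end), weight)
    with half-open spans. Returns indices of the chosen phrase pairs."""
    prob = LpProblem("phrase_alignment", LpMaximize)
    x = [LpVariable(f"pair_{k}", cat=LpBinary) for k in range(len(candidates))]
    prob += lpSum(w * x[k] for k, (_, _, w) in enumerate(candidates))
    for i in range(n_src):      # each source word in exactly one pair
        prob += lpSum(x[k] for k, ((s, e), _, _) in enumerate(candidates)
                      if s <= i < e) == 1
    for j in range(n_tgt):      # each target word in exactly one pair
        prob += lpSum(x[k] for k, (_, (s, e), _) in enumerate(candidates)
                      if s <= j < e) == 1
    prob.solve()
    return [k for k in range(len(candidates)) if x[k].value() > 0.5]
```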
Variable-Length Word Encodings for Neural Translation Models
TLDR: This work proposes and compares three variable-length encoding schemes that represent a large-vocabulary corpus using a much smaller vocabulary with no loss in information, improving WMT English-French translation performance by up to 1.7 BLEU.
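One simple member of this family of schemes: write each word's vocabulary rank in a fixed base k, so arbitrarily many words map losslessly onto k pseudo-word symbols. The paper's schemes are Huffman-based; this fixed-base variant is an illustrative assumption:

```python
def encode_rank(rank, k):
    """Encode a word's vocabulary rank as symbol ids in 0..k-1."""
    digits = [rank % k]
    while rank >= k:
        rank //= k
        digits.append(rank % k)
    return digits[::-1]

def decode_rank(digits, k):
    rank = 0
    for d in digits:
        rank = rank * k + d
    return rank

assert decode_rank(encode_rank(12345, 256), 256) == 12345
```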