What do Neural Machine Translation Models Learn about Morphology?
TLDR
This work analyzes the representations learned by neural MT models at various levels of granularity and empirically evaluates the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks.
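For illustration, a minimal sketch of the extrinsic probing setup this TLDR describes: train a simple classifier on frozen per-token encoder states and measure tagging accuracy. The `encoder_states` and `pos_labels` arrays below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: 1,000 tokens, 512-dim frozen encoder
# states, 12 POS tag classes (all hypothetical sizes).
encoder_states = rng.normal(size=(1000, 512))
pos_labels = rng.integers(0, 12, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    encoder_states, pos_labels, test_size=0.2, random_state=0
)

# The probe stays deliberately simple (linear), so its accuracy
# reflects what the frozen representations encode rather than
# the capacity of the probe itself.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"POS probing accuracy: {probe.score(X_test, y_test):.3f}")
```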
Farasa: A Fast and Furious Segmenter for Arabic
TLDR
Farasa outperforms or is on par with the state-of-the-art Arabic segmenters (Stanford and MADAMIRA), while being more than one order of magnitude faster.
A Joint Sequence Translation Model with Integrated Reordering
TLDR
A novel machine translation model that represents translation as a linear sequence of operations, covering not only translation but also reordering operations, together with a joint sequence model over the translation and reordering probabilities that is more flexible than standard phrase-based MT.
Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks
TLDR
This paper investigates the quality of vector representations learned at different layers of NMT encoders and finds that higher layers are better at learning semantics while lower layers tend to be better for part-of-speech tagging.
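The layer comparison reported here is the same probing recipe run once per encoder layer. A rough sketch, with `layer_states` as a hypothetical mapping from layer index to per-token representations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
labels = rng.integers(0, 12, size=1000)          # synthetic tags
layer_states = {l: rng.normal(size=(1000, 512))  # synthetic states
                for l in range(4)}

# Train one linear probe per layer and compare held-out accuracy;
# with real NMT states, the per-layer scores reveal which layers
# encode the property being probed.
for layer, X in sorted(layer_states.items()):
    probe = LogisticRegression(max_iter=1000).fit(X[:800], labels[:800])
    acc = probe.score(X[800:], labels[800:])
    print(f"layer {layer}: probe accuracy = {acc:.3f}")
```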
Urdu Word Segmentation
TLDR
This paper discusses how orthographic and linguistic features in Urdu trigger these two segmentation problems (space omission and space insertion) and employs a hybrid solution that performs an n-gram ranking on top of a rule-based maximum-matching heuristic.
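As a rough sketch of the greedy maximum-matching heuristic the TLDR mentions: repeatedly take the longest lexicon entry that prefixes the remaining text. The toy lexicon is an assumption here, and the paper's n-gram re-ranking step on top of this is omitted.

```python
def max_match(text, lexicon, max_len=8):
    """Greedy longest-match segmentation over an unspaced string."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest candidate first, shrinking down to one
        # character; an unknown single character is kept as-is.
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in lexicon or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

lexicon = {"this", "is", "a", "test"}
print(max_match("thisisatest", lexicon))  # ['this', 'is', 'a', 'test']
```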
Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation
We address the problem of simultaneous translation by modifying the Neural MT decoder to operate with a dynamically built encoder and attention. We propose a tunable agent which decides the best …
Identifying and Controlling Important Neurons in Neural Machine Translation
TLDR
It is shown experimentally that translation quality depends on the discovered neurons, and that NMT translations can be controlled in predictable ways by modifying the activations of individual neurons.
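A hedged sketch of the neuron-control idea: clamp one neuron's activation across all timesteps and observe how the output changes. The hook mechanics below are generic PyTorch, not the paper's code, and the layer, neuron index, and forced value are arbitrary for illustration.

```python
import torch
import torch.nn as nn

model = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
neuron, value = 7, 3.0  # hypothetical neuron index and forced activation

def clamp_neuron(module, inputs, output):
    # nn.GRU returns (states, h_n); overwrite one dimension of
    # every hidden state and return the modified output.
    states, h_n = output
    states = states.clone()
    states[..., neuron] = value
    return states, h_n

handle = model.register_forward_hook(clamp_neuron)
x = torch.randn(1, 5, 16)
states, _ = model(x)
print(states[0, :, neuron])  # every entry equals `value`
handle.remove()
```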
Findings of the First Shared Task on Machine Translation Robustness
TLDR
The task provides a testbed representing challenges facing MT models deployed in the real world, and facilitates new approaches to improve models’ robustness to noisy input and domain mismatch.
What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models
TLDR
A comprehensive analysis of neurons that proposes two methods: Linguistic Correlation Analysis, a supervised method to extract the most relevant neurons with respect to an extrinsic task, and Cross-model Correlation Analysis, an unsupervised method to extract salient neurons w.r.t. the model itself.
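In the spirit of the supervised method, a rough sketch of ranking neurons by relevance to an extrinsic task: fit a linear probe, then score each neuron by the magnitude of its learned weights. The data is synthetic (with one planted "relevant" neuron), and the paper's exact selection criterion differs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
activations = rng.normal(size=(2000, 256))     # per-token activations
labels = (activations[:, 42] > 0).astype(int)  # plant a relevant neuron

# Fit a linear probe for the task, then treat the per-neuron weight
# magnitude as a relevance score and rank neurons by it.
probe = LogisticRegression(max_iter=1000).fit(activations, labels)
relevance = np.abs(probe.coef_).sum(axis=0)    # one score per neuron
top = np.argsort(relevance)[::-1][:5]
print("most task-relevant neurons:", top)      # neuron 42 ranks first
```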
Investigating the Usefulness of Generalized Word Representations in SMT
We investigate the use of generalized representations (POS, morphological analysis and word clusters) in phrase-based models and the N-gram-based Operation Sequence Model (OSM). Our integration …