Joint Feature Selection in Distributed Stochastic Learning for Large-Scale Discriminative Training in SMT
TLDR: We deploy local features for SCFG-based SMT that can be read off from rules at runtime, and present a learning algorithm that applies l1/l2 regularization for joint feature selection over distributed stochastic learning processes.
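The l1/l2 penalty named in this TLDR is a group lasso over features: a feature's weights across all distributed learners form one group, so the feature is kept or discarded jointly. A minimal sketch, assuming the shard weights are stacked into a matrix of shape (n_features, n_shards) and that selection is applied via the standard group soft-thresholding (proximal) step; the layout and names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def l1_l2_prox(W, tau):
    """Proximal step for an l1/l2 (group lasso) penalty.

    W   -- weight matrix of shape (n_features, n_shards), one column per
           distributed stochastic learner (hypothetical layout).
    tau -- regularization strength times the step size.

    Each feature's row is one group: its l2 norm is shrunk by tau, and rows
    whose norm falls below tau are zeroed, so a feature is selected or
    dropped jointly across all shards.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)               # per-feature l2 norm
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))  # group soft-threshold
    return W * scale

# Toy usage: 5 features across 3 shards; low-magnitude feature rows shrink to zero.
rng = np.random.default_rng(0)
W = rng.normal(scale=[[1.0], [0.05], [0.8], [0.03], [0.6]], size=(5, 3))
W_selected = l1_l2_prox(W, tau=0.2)
print(np.linalg.norm(W_selected, axis=1) > 0)  # mask of jointly selected features
```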
Online adaptation to post-edits for phrase-based statistical machine translation
TLDR: We present an efficient online learning framework for adapting all modules of a phrase-based statistical machine translation system to post-edited translations.
Multi-Task Learning for Improved Discriminative Training in SMT
TLDR: We present an experimental evaluation of the question whether multi-task learning depends on a “natural” division of data into tasks that balances shared and individual knowledge, or whether its inherent regularization makes multi-task learning a broadly applicable remedy against overfitting.
Generative and Discriminative Methods for Online Adaptation in SMT
In an online learning protocol, immediate feedback about each example is used to refine the next prediction. We apply this protocol to statistical machine translation for computer-assisted translation.
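The online learning protocol described above amounts to a simple loop: translate one segment, receive its feedback (e.g. a post-edit), update the model, then move to the next segment. A minimal sketch, assuming a hypothetical `model` object with `translate` and `update` methods; this illustrates the protocol, not the paper's actual interface:

```python
def online_adaptation(model, stream):
    """Online learning protocol sketch: predict, get feedback, update.

    model  -- hypothetical object exposing translate(src) and update(src, ref).
    stream -- iterable of (source, feedback_translation) pairs, one at a time.
    """
    outputs = []
    for src, feedback in stream:
        hyp = model.translate(src)    # predict with the current model state
        outputs.append(hyp)
        model.update(src, feedback)   # refine the model before the next example
    return outputs
```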
Compact Personalized Models for Neural Machine Translation
TLDR: We propose and compare methods for gradient-based domain adaptation of self-attentive neural machine translation models that achieve high space efficiency, time efficiency, and translation performance by encouraging structured sparsity in the set of offset tensors during learning via group lasso regularization.
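The group lasso regularization mentioned above can be pictured as a proximal step over per-tensor offsets added to a frozen base model: each offset tensor forms one group whose norm is shrunk, and groups driven exactly to zero need not be stored, which is what keeps the personalized model compact. A minimal sketch, using a hypothetical dict-of-arrays representation of the offsets; the names and grouping are assumptions, not the paper's implementation:

```python
import numpy as np

def prox_group_lasso_offsets(offsets, lam, lr):
    """Group lasso proximal step over per-tensor offsets.

    offsets -- dict mapping parameter names to offset (delta) arrays that are
               added to a frozen base model (hypothetical representation).
    lam     -- group lasso strength; lr -- learning rate of the gradient step.

    Each offset tensor is one group: its norm is shrunk, and tensors whose
    norm drops below lr * lam become exactly zero, so only the few non-zero
    offsets need to be stored per user or domain.
    """
    pruned = {}
    for name, delta in offsets.items():
        norm = np.linalg.norm(delta)
        factor = max(0.0, 1.0 - lr * lam / max(norm, 1e-12))
        pruned[name] = delta * factor
    return pruned
```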
A user-study on online adaptation of neural machine translation to human post-edits
TLDR: We present the first user study on online adaptation of NMT to human post-edits, in the domain of patent translation.
Response-based Learning for Grounded Machine Translation
TLDR: We propose a novel learning approach for statistical machine translation (SMT) that allows supervision signals for structured learning to be extracted from an extrinsic response to a translation input.
The Heidelberg University English-German translation system for IWSLT 2015
TLDR: We focus on improving a hierarchical phrase-based system by adding large language models and thousands of sparse, lexicalized features tuned on a large in-domain data set.
Multi-Task Minimum Error Rate Training for SMT
TLDR: We present experiments on multi-task learning for discriminative training in statistical machine translation (SMT), extending standard minimum-error-rate training by techniques that take advantage of the similarity of related tasks.
A Post-editing Interface for Immediate Adaptation in Statistical Machine Translation
TLDR: We present an open-source post-editing interface for adaptive statistical MT that offers in-depth monitoring capabilities and excellent expandability, and can facilitate practical studies.