In this paper, we propose a novel neural network model called RNN Encoder–Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly …
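As a rough illustration of the encoder–decoder idea described above, here is a minimal GRU-based sketch in PyTorch. The layer sizes, toy data, and the use of the final encoder state as the fixed-length summary vector are assumptions for the demo, not the paper's exact configuration.

```python
# Illustrative sketch (not the paper's exact architecture): a GRU encoder
# compresses the source sequence into a fixed-length vector, which initialises
# a GRU decoder that generates the target sequence. Both networks are trained
# jointly on (source, target) pairs.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):                    # src: (batch, src_len)
        _, hidden = self.rnn(self.embed(src))  # hidden: (1, batch, hidden_dim)
        return hidden                          # fixed-length summary vector

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden):            # tgt: (batch, tgt_len)
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden        # logits over target vocabulary

# Joint training step on toy data (vocabulary sizes are arbitrary).
enc, dec = Encoder(1000), Decoder(1200)
src = torch.randint(0, 1000, (2, 7))
tgt = torch.randint(0, 1200, (2, 5))
logits, _ = dec(tgt[:, :-1], enc(src))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1200), tgt[:, 1:].reshape(-1))
loss.backward()
```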
This paper describes the three systems developed by the LIUM for the IWSLT 2011 evaluation campaign. We participated in three of the proposed tasks, namely the Automatic Speech Recognition task (ASR), the ASR system combination task (ASR_SC) and the Spoken Language Translation task (SLT), since these tasks are all related to speech translation. We present …
Recent works on end-to-end neural network-based architectures for machine translation have shown promising results for English-French and English-German translation. Unlike these language pairs, however, in the majority of scenarios, there is a lack of high quality parallel corpora. In this work, we focus on applying neural machine translation to …
We present novel features designed with a deep neural network for Machine Translation (MT) Quality Estimation (QE). The features are learned with a Continuous Space Language Model to estimate the probabilities of the source and target segments. These new features, along with standard MT system-independent features, are benchmarked on a series of datasets …
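A minimal sketch of the underlying feature idea, assuming a toy unigram language model in place of the Continuous Space Language Model: the source and target segments are scored by a language model, and the resulting cross-entropies serve as sentence-level QE features alongside the standard ones. The corpora and segments below are illustrative.

```python
# Toy stand-in for a language-model-based QE feature: score a segment with an
# add-alpha smoothed unigram LM and use its per-token cross-entropy as a feature.
import math
from collections import Counter

def train_unigram_lm(corpus_tokens, alpha=1.0):
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1                     # +1 slot for unseen tokens
    def logprob(tok):                           # smoothed log P(tok)
        return math.log((counts[tok] + alpha) / (total + alpha * vocab))
    return logprob

def cross_entropy(segment_tokens, logprob):
    # Average negative log-probability per token: lower means the segment
    # looks more "fluent" to the language model.
    return -sum(logprob(t) for t in segment_tokens) / max(len(segment_tokens), 1)

src_lm = train_unigram_lm("the cat sat on the mat".split())
tgt_lm = train_unigram_lm("le chat est sur le tapis".split())

features = {
    "src_xent": cross_entropy("the cat sat".split(), src_lm),
    "tgt_xent": cross_entropy("le chat dort".split(), tgt_lm),
}
print(features)  # these scores would be appended to the standard QE features
```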
We describe our systems for Tasks 1 and 2 of the WMT15 Shared Task on Quality Estimation. Our submissions use (i) a continuous space language model to extract additional features for Task 1 (SHEF-GP, SHEF-SVM), (ii) a continuous bag-of-words model to produce word embeddings as features for Task 2 (SHEF-W2V) and (iii) a combination of features produced by …
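A rough sketch of point (ii), assuming gensim's Word2Vec in CBOW mode as the continuous bag-of-words model; the toy corpus, vector dimensionality, and the way vectors are handed to a downstream labeller are illustrative assumptions, not the SHEF-W2V setup.

```python
# Continuous bag-of-words embeddings as per-word QE features.
# In gensim's Word2Vec, sg=0 selects the CBOW training objective.
from gensim.models import Word2Vec

corpus = [
    "the translation is fluent".split(),
    "the translation contains errors".split(),
]
cbow = Word2Vec(sentences=corpus, vector_size=50, window=5, min_count=1, sg=0)

# One feature vector per target word, ready to be concatenated with other
# word-level QE features for a sequence labeller.
target_sentence = "the translation is fluent".split()
word_features = [cbow.wv[w] for w in target_sentence]
print(len(word_features), word_features[0].shape)   # 4 vectors of dimension 50
```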
This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of …
This paper describes our systems for Task 1 of the WMT16 Shared Task on Quality Estimation. Our submissions use (i) a continuous space language model (CSLM) to extract sentence embeddings and cross-entropy scores, (ii) a neural network machine translation (NMT) model, (iii) a set of QuEst features, and (iv) a combination of features produced by QuEst and …
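A hedged sketch of the feature-combination idea in (iv), assuming a simple support vector regressor over concatenated neural and QuEst-style features; the feature values, labels, and choice of learner below are toy assumptions, not the submitted configuration.

```python
# Combine baseline QuEst-style features with neural scores (e.g. CSLM
# cross-entropies, NMT model scores) and fit a standard regressor on
# sentence-level quality labels.
import numpy as np
from sklearn.svm import SVR

quest_features = np.array([[17, 0.82, 3], [25, 0.64, 7]], dtype=float)  # e.g. length, LM ratio, ...
neural_features = np.array([[4.1, 3.7], [6.3, 5.9]])                    # e.g. src/tgt cross-entropy
X = np.hstack([quest_features, neural_features])
y = np.array([0.78, 0.41])       # HTER-style quality labels (toy values)

model = SVR(kernel="rbf").fit(X, y)
print(model.predict(X))
```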