Corpus ID: 53593076

Hallucinations in Neural Machine Translation

@inproceedings{Lee2018HallucinationsIN,
  title={Hallucinations in Neural Machine Translation},
  author={Katherine Lee and Orhan Firat and Ashish Agarwal and Clara Fannjiang and David Sussillo},
  year={2018}
}
Neural machine translation (NMT) systems have reached state-of-the-art performance in translating text and are in wide deployment. […] We describe a method to generate hallucinations and show that many common variations of the NMT architecture are susceptible to them. We study a variety of approaches to reduce the frequency of hallucinations, including data augmentation, dynamical systems, and regularization techniques, showing that data augmentation significantly reduces hallucination frequency…
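
The generation method tests whether a small perturbation of the source, such as a single inserted token, causes the output to lose all relation to the input. A minimal sketch of that probe, assuming a generic `translate` callable in place of a trained model and unigram overlap in place of the paper's adjusted-BLEU comparison:

```python
import random

# Hypothetical stand-in for an NMT model; any callable str -> str fits here.
def translate(src: str) -> str:
    return " ".join(reversed(src.split()))  # placeholder, not a real model

def perturb(src: str, token: str) -> str:
    """Insert a single token at a random position in the source sentence,
    the kind of perturbation used to elicit hallucinations."""
    words = src.split()
    pos = random.randrange(len(words) + 1)
    return " ".join(words[:pos] + [token] + words[pos:])

def looks_hallucinated(src: str, token: str, threshold: float = 0.1) -> bool:
    """Flag a candidate hallucination when translating the perturbed source
    yields output sharing almost no words with the original translation.
    Unigram overlap is a simpler stand-in for an adjusted-BLEU comparison."""
    base = set(translate(src).split())
    pert = set(translate(perturb(src, token)).split())
    return len(base & pert) / max(len(base), 1) < threshold
```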

The Curious Case of Hallucinations in Neural Machine Translation

This work considers hallucinations under corpus-level noise (without any source perturbation) and demonstrates that two prominent types of natural hallucinations could be generated and explained through specific corpus-level noise patterns.

Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation

It is shown that for preventive settings previously used methods are largely inadequate and that sequence log-probability works best, performing on par with reference-based methods; DeHallucinator, a simple method for alleviating hallucinations at test time, significantly reduces the hallucinatory rate.
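
Sequence log-probability is attractive as a detector because it needs no reference translation. A minimal sketch of the detection step, assuming per-token log-probabilities are available from the decoder and using an illustrative threshold:

```python
def seq_logprob(token_logprobs: list[float]) -> float:
    """Length-normalized sequence log-probability of a hypothesis."""
    return sum(token_logprobs) / max(len(token_logprobs), 1)

def flag_hallucinations(hypotheses, threshold: float = -2.0):
    """hypotheses: iterable of (text, per-token log-probs) pairs.
    Returns hypotheses whose normalized log-prob falls below the threshold;
    the threshold here is illustrative and would be tuned on held-out data."""
    return [text for text, lps in hypotheses if seq_logprob(lps) < threshold]

hyps = [("the cat sat on the mat", [-0.1, -0.2, -0.1, -0.3, -0.1, -0.2]),
        ("completely unrelated output", [-3.5, -4.0, -2.9])]
print(flag_hallucinations(hyps))  # ['completely unrelated output']
```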

Domain Robustness in Neural Machine Translation

In experiments on German to English OPUS data and on German to Romansh, a low-resource scenario, it is found that several methods improve domain robustness, with reconstruction standing out as a method that not only improves automatic scores but also shows improvements in a manual assessment of adequacy, albeit at some loss in fluency.
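
Reconstruction here means translating and then translating back, penalizing outputs from which the source cannot be recovered. A toy sketch of the idea, with placeholder callables standing in for the forward and backward models and token-set overlap standing in for a real reconstruction loss:

```python
def overlap(a: str, b: str) -> float:
    """Jaccard overlap between the token sets of two sentences."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

def reconstruction_loss(src: str, forward, backward) -> float:
    """Translate src, translate it back, and penalize divergence from the
    original source; hallucinated outputs reconstruct poorly."""
    hyp = forward(src)      # src -> tgt
    recon = backward(hyp)   # tgt -> reconstructed src
    return 1.0 - overlap(src, recon)

# Identity "models" reconstruct perfectly, so the penalty is zero:
print(reconstruction_loss("der Hund läuft", lambda s: s, lambda s: s))  # 0.0
```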

Survey of Hallucination in Natural Language Generation

A broad overview of the research progress and challenges in the hallucination problem in NLG is provided, including task-specific research progress on hallucinations in the following downstream tasks, namely abstractive summarization, dialogue generation, generative question answering, data-to-text generation, and machine translation.

Prevent the Language Model from being Overconfident in Neural Machine Translation

A Margin-based Token-level Objective (MTO) and a Margin-based Sentence-level Objective (MSO) are proposed to maximize the margin, preventing the LM from being overconfident and improving translation adequacy as well as fluency.
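
The intuition is that hallucination risk rises when the target-side language model alone, ignoring the source, is already confident. One plausible hinge-style reading of a margin objective at the token level, not the paper's exact formulation:

```python
def margin_token_loss(p_tm: float, p_lm: float, margin: float = 0.3) -> float:
    """Zero once the full translation model assigns the gold token at least
    `margin` more probability than the target-only language model;
    otherwise the loss pushes that margin open."""
    return max(0.0, margin - (p_tm - p_lm))

# LM confident without the source: positive loss signals overconfidence risk.
print(margin_token_loss(p_tm=0.4, p_lm=0.6))  # 0.5
# TM dominates by more than the margin: no penalty.
print(margin_token_loss(p_tm=0.9, p_lm=0.2))  # 0.0
```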

Thinking Hallucination for Video Captioning

A new metric, COAHA (caption object and action hallucination assessment), is proposed to quantify the degree of hallucination, and the accompanying method achieves state-of-the-art performance on the MSR Video-to-Text (MSR-VTT) and Microsoft Research Video Description Corpus (MSVD) datasets, by a massive margin in CIDEr score.
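
COAHA's exact definition is in the cited paper; a simplified proxy for the quantity it targets is the fraction of predicted objects or actions that never appear in the ground truth:

```python
def hallucination_rate(predicted: set[str], ground_truth: set[str]) -> float:
    """Fraction of predicted objects/actions absent from the ground truth;
    a simplified proxy, not COAHA's exact formulation."""
    if not predicted:
        return 0.0
    return len(predicted - ground_truth) / len(predicted)

print(hallucination_rate({"dog", "ball", "car"}, {"dog", "ball"}))  # 0.33...
```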

Neural Path Hunter: Reducing Hallucination in Dialogue Systems via Path Grounding

This paper proposes Neural Path Hunter which follows a generate-then-refine strategy whereby a generated response is amended using the KG, and leverages a separate token-level fact critic to identify plausible sources of hallucination and retrieves correct entities by crafting a query signal that is propagated over a k-hop subgraph.

Non-Parametric Adaptation for Neural Machine Translation

This work proposes a novel n-gram-level retrieval approach that relies on local phrase-level similarities, retrieving neighbors that are useful for translation even when overall sentence similarity is low, and combines it with an expressive neural network that lets the model extract information from the noisy retrieved context.
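
A minimal sketch of the retrieval side, assuming a list of (source, target) training pairs; the neural fusion of the retrieved context is omitted:

```python
from collections import defaultdict

def build_ngram_index(corpus_pairs, n=3):
    """Index training pairs by source n-grams so neighbors can be found
    from local phrase matches even when whole-sentence similarity is low."""
    index = defaultdict(set)
    for i, (src, _tgt) in enumerate(corpus_pairs):
        toks = src.split()
        for j in range(len(toks) - n + 1):
            index[tuple(toks[j:j + n])].add(i)
    return index

def retrieve(query, corpus_pairs, index, n=3, k=2):
    """Rank training pairs by shared source n-grams; return the top-k."""
    toks = query.split()
    counts = defaultdict(int)
    for j in range(len(toks) - n + 1):
        for i in index.get(tuple(toks[j:j + n]), ()):
            counts[i] += 1
    best = sorted(counts, key=counts.get, reverse=True)[:k]
    return [corpus_pairs[i] for i in best]

pairs = [("der kleine Hund läuft", "the small dog runs"),
         ("die Katze schläft tief", "the cat sleeps deeply")]
idx = build_ngram_index(pairs, n=2)
print(retrieve("der kleine Hund schläft", pairs, idx, n=2, k=1))
```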

Rethinking Data Augmentation for Low-Resource Neural Machine Translation: A Multi-Task Learning Approach

This paper proposes a multi-task DA approach in which new sentence pairs are generated by applying transformations that produce unfluent target sentences, such as reversing the order of the target sentence, and shows consistent improvements over the baseline and over DA methods aimed at extending the support of the empirical data distribution.
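
A sketch of the augmentation step, using the reversing transformation named above plus an illustrative local-shuffle variant; each transformed pair would be trained as an auxiliary task rather than as extra data for the main one:

```python
import random

def augment_pair(src: str, tgt: str):
    """Generate auxiliary-task pairs by transforming the target side; the
    transformed targets are deliberately unfluent, and producing them is
    the auxiliary task."""
    tgt_toks = tgt.split()
    shuffled = tgt_toks[:]
    random.shuffle(shuffled)
    return [(src, " ".join(reversed(tgt_toks)), "reverse"),
            (src, " ".join(shuffled), "shuffle")]  # shuffle is illustrative

print(augment_pair("ich bin hier", "i am here"))
```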

A Reinforced Generation of Adversarial Examples for Neural Machine Translation

The results show that the method efficiently produces stable attacks with meaning-preserving adversarial examples that could expose pitfalls for a given performance metric, e.g., BLEU, and could target any given neural machine translation architecture.
...

References

Showing 1–10 of 34 references

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation

GNMT, Google's Neural Machine Translation system, is presented, which attempts to address many of the weaknesses of conventional phrase-based translation systems and provides a good balance between the flexibility of "character"-delimited models and the efficiency of "word"-delimited models.

Massive Exploration of Neural Machine Translation Architectures

This work presents a large-scale analysis of the sensitivity of NMT architectures to common hyperparameters, and reports empirical results and variance numbers for several hundred experimental runs corresponding to over 250,000 GPU hours on a WMT English to German translation task.

Synthetic and Natural Noise Both Break Neural Machine Translation

It is found that a model based on a character convolutional neural network is able to simultaneously learn representations robust to multiple kinds of noise, including structure-invariant word representations and robust training on noisy texts.
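
One of the paper's synthetic noise types swaps adjacent inner characters while keeping the first and last letters fixed. A minimal sketch of that corruption, usable both to attack a model and to build robust-training data:

```python
import random

def swap_noise(word: str) -> str:
    """Swap two adjacent inner characters, one of the synthetic noise
    types studied in the paper (alongside natural typos)."""
    if len(word) < 4:
        return word
    i = random.randrange(1, len(word) - 2)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def noisy_sentence(sent: str, p: float = 0.5) -> str:
    """Apply swap noise independently to each word with probability p."""
    return " ".join(swap_noise(w) if random.random() < p else w
                    for w in sent.split())

print(noisy_sentence("neural machine translation is brittle"))
```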

Effective Approaches to Attention-based Neural Machine Translation

A global approach which always attends to all source words and a local one that only looks at a subset of source words at a time are examined, demonstrating the effectiveness of both approaches on the WMT translation tasks between English and German in both directions.

Neural Machine Translation of Rare Words with Subword Units

This paper introduces a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units, and empirically shows that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English-German and English-Russian by 1.3 BLEU.
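
The core of the approach is the byte-pair-encoding loop: repeatedly merge the most frequent adjacent symbol pair until the vocabulary budget is reached. A toy version close in spirit to the paper's reference code (using plain `str.replace`, which is safe on this small example but not fully general):

```python
from collections import Counter

def get_stats(vocab):
    """Count adjacent symbol-pair frequencies over a word vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_vocab(pair, vocab):
    """Merge the chosen symbol pair everywhere it occurs."""
    old, new = " ".join(pair), "".join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

# Toy corpus: words as space-separated symbols with an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6}
for _ in range(5):
    pair = get_stats(vocab).most_common(1)[0][0]
    vocab = merge_vocab(pair, vocab)
print(vocab)
```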

Achieving Human Parity on Automatic Chinese to English News Translation

It is found that Microsoft's latest neural machine translation system has reached a new state-of-the-art, and that the translation quality is at human parity when compared to professional human translations.

Neural Machine Translation by Jointly Learning to Align and Translate

It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.
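
The soft-search is an additive attention: score each source annotation against the decoder state, normalize the scores with a softmax, and take the weighted sum as the context vector. A plain-Python sketch on toy vectors:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def additive_attention(query, keys, values, v, W_q, W_k):
    """score(q, k) = v . tanh(W_q q + W_k k), then a softmax over source
    positions and a weighted sum of the annotations: the (soft-)search
    of the paper, in plain Python for tiny vectors."""
    wq = matvec(W_q, query)
    scores = []
    for k in keys:
        hidden = [math.tanh(a + b) for a, b in zip(wq, matvec(W_k, k))]
        scores.append(sum(vi * hi for vi, hi in zip(v, hidden)))
    weights = softmax(scores)
    context = [sum(w * val[d] for w, val in zip(weights, values))
               for d in range(len(values[0]))]
    return weights, context

# 2-d toy example: two source positions, identity projections.
I = [[1.0, 0.0], [0.0, 1.0]]
w, c = additive_attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                          [[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0], I, I)
print(w, c)
```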

Neural Machine Translation in Linear Time

The ByteNet decoder attains state-of-the-art performance on character-level language modelling and outperforms the previous best results obtained with recurrent networks; the latent alignment structure contained in the representations reflects the expected alignment between the tokens.

Visualizing and Understanding Neural Machine Translation

This work proposes to use layer-wise relevance propagation (LRP) to compute the contribution of each contextual word to arbitrary hidden states in the attention-based encoder-decoder framework and shows that visualization with LRP helps to interpret the internal workings of NMT and analyze translation errors.
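
LRP propagates an output relevance score back through the network so that each input receives credit proportional to its contribution. A sketch of the epsilon rule for a single linear layer, one common LRP variant rather than the paper's full NMT formulation:

```python
def lrp_linear(x, w, relevance_out, eps=1e-6):
    """Redistribute the relevance of a linear layer's outputs back to its
    inputs proportionally to each contribution z_ij = x_i * w_ij
    (the epsilon rule); eps stabilizes near-zero pre-activations."""
    n_in, n_out = len(x), len(w[0])
    z = [sum(x[i] * w[i][j] for i in range(n_in)) for j in range(n_out)]
    rel_in = [0.0] * n_in
    for j in range(n_out):
        denom = z[j] + (eps if z[j] >= 0 else -eps)
        for i in range(n_in):
            rel_in[i] += x[i] * w[i][j] / denom * relevance_out[j]
    return rel_in

# Relevance is (approximately) conserved across the layer: sums to 2.0.
x = [1.0, 2.0]
w = [[0.5, -0.2], [0.1, 0.3]]
print(lrp_linear(x, w, relevance_out=[1.0, 1.0]))
```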

Towards Robust Neural Machine Translation

Experimental results on Chinese-English, English-German and English-French translation tasks show that the proposed approaches can not only achieve significant improvements over strong NMT systems but also improve the robustness of NMT models.