Lost in Machine Translation: A Method to Reduce Meaning Loss

@inproceedings{CohnGordon2019LostIM,
  title={Lost in Machine Translation: A Method to Reduce Meaning Loss},
  author={Reuben Cohn-Gordon and Noah D. Goodman},
  booktitle={North American Chapter of the Association for Computational Linguistics},
  year={2019}
}
A desideratum of high-quality translation systems is that they preserve meaning, in the sense that two sentences with different meanings should not translate to one and the same sentence in another language. However, state-of-the-art systems often fail in this regard, particularly in cases where the source and target languages partition the “meaning space” in different ways. For instance, “I cut my finger.” and “I cut my finger off.” describe different states of the world but are translated to… 
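The abstract only sketches the idea, but the core intuition — prefer candidate translations from which the source sentence can be recovered, in the spirit of the Rational Speech Acts framework — can be illustrated with a toy reranker. Everything below is a hedged sketch: the probability tables, the German candidates, and the `alpha` parameter are illustrative assumptions for exposition, not the paper's actual models or data.

```python
# Toy pragmatic reranking for translation: rescore each candidate by how
# well the source can be recovered from it (cycle-consistency), so that
# sources with different meanings are pushed toward distinct translations.
# All probabilities are invented for illustration.

# Forward model P(target | source): a base translator's candidate scores.
forward = {
    "I cut my finger.":     {"Ich habe mich in den Finger geschnitten.": 0.7,
                             "Ich habe mir den Finger abgeschnitten.":   0.3},
    "I cut my finger off.": {"Ich habe mich in den Finger geschnitten.": 0.4,
                             "Ich habe mir den Finger abgeschnitten.":   0.6},
}

def backward(target, source):
    """Backward model P(source | target) via Bayes with a uniform source prior."""
    num = forward[source][target]
    den = sum(forward[s].get(target, 0.0) for s in forward)
    return num / den

def pragmatic_translate(source, alpha=1.0):
    """Pick the candidate maximizing forward score times source recoverability."""
    candidates = forward[source]
    scores = {t: p * backward(t, source) ** alpha
              for t, p in candidates.items()}
    return max(scores, key=scores.get)
```

With these toy numbers, the purely forward model would translate both English sentences to whichever candidate scores highest, collapsing the meaning distinction; the pragmatic reranker instead sends "I cut my finger off." to the "abgeschnitten" candidate, because the milder translation would make the source hard to recover.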

Style-transfer and Paraphrase: Looking for a Sensible Semantic Similarity Metric

It is demonstrated that none of the metrics widely used in the literature is close enough to human judgment in these tasks to be considered a reasonable solution to measure semantic similarity in reformulated texts at the moment.

Neural Machine Translation

A comprehensive treatment of the topic, ranging from introduction to neural networks, computation graphs, description of the currently dominant attentional sequence-to-sequence model, recent refinements, alternative architectures and challenges.

Will I Sound like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness

Inspired by social cognition and pragmatics, existing dialogue agents are endowed with public self-consciousness on the fly through an imaginary listener, leading them to refrain from uttering contradictions and improving the consistency of existing dialogue models.

Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes

Taking inspiration from social cognition, a generative estimator is used to infer emotion cause words from utterances with no word-level label and a novel method based on pragmatics is introduced to make dialogue models focus on targeted words in the input during generation.

Pragmatics in Grounded Language Learning: Phenomena, Tasks, and Modeling Approaches

People rely heavily on context to enrich meaning beyond what is literally said, enabling concise but effective communication.

Mutual exclusivity as a challenge for neural networks

Whether or not standard neural architectures have an ME bias is investigated, demonstrating that they lack this learning assumption, and it is shown that their inductive biases are poorly matched to early-phase learning in several standard tasks: machine translation and object recognition.

Public Self-consciousness for Endowing Dialogue Agents with Consistent Persona

This approach, based on the Rational Speech Acts framework, maintains persona consistency in dialogue agents in an unsupervised manner, requiring neither additional annotations nor pretrained external models.

Mutual exclusivity as a challenge for deep neural networks

Whether or not standard neural architectures have an ME bias is investigated, demonstrating that they lack this learning assumption, and it is demonstrated that their inductive biases are poorly matched to lifelong learning formulations of classification and translation.

A practical introduction to the Rational Speech Act modeling framework

A practical introduction to and critical assessment of the Bayesian Rational Speech Act modeling framework is provided, unpacking theoretical foundations, exploring technological innovations, and drawing connections to issues beyond current applications.

References

A Call for Clarity in Reporting BLEU Scores

Pointing to the success of the parsing community, it is suggested that machine translation researchers settle upon the BLEU scheme, which does not allow for user-supplied reference processing, and a new tool, SACREBLEU, is provided to facilitate this.

Neural Machine Translation by Jointly Learning to Align and Translate

It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and it is proposed to extend it by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly.

Improving Neural Machine Translation Models with Monolingual Data

This work pairs monolingual training data with automatic back-translations, treating the result as additional parallel training data, and obtains substantial improvements on the WMT 15 English->German task and the low-resource IWSLT 14 Turkish->English task.

Pragmatically Informative Image Captioning with Character-Level Inference

This work combines a neural image captioner with a Rational Speech Acts model to make a system that is pragmatically informative, and finds that the utterance-level effect of referential captions can be obtained with only character-level decisions.

A Convolutional Encoder Model for Neural Machine Translation

A faster and simpler architecture based on a succession of convolutional layers is presented, which encodes the source sentence simultaneously, in contrast to recurrent networks, whose computation is constrained by temporal dependencies.

Pragmatic Language Interpretation as Probabilistic Inference

Attention is All you Need

A new simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, as shown by applying it successfully to English constituency parsing with both large and limited training data.

Reasoning about Pragmatics with Neural Listeners and Speakers

A model for pragmatically describing scenes, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics, that succeeds 81% of the time in human evaluations on a referring expression game.

Context-Aware Captions from Context-Agnostic Supervision

An inference technique is introduced that produces discriminative, context-aware image captions using only generic, context-agnostic training data, generating language that uniquely refers to one of two semantically similar images in the COCO dataset.

Generation and Comprehension of Unambiguous Object Descriptions

This work proposes a method that can generate an unambiguous description of a specific object or region in an image, and that can also comprehend or interpret such an expression to infer which object is being described. The method outperforms previous approaches that generate object descriptions without taking other potentially ambiguous objects in the scene into account.