Inducing Positive Perspectives with Text Reframing

@inproceedings{Ziems2022InducingPP,
  title={Inducing Positive Perspectives with Text Reframing},
  author={Caleb Ziems and Minzhi Li and Anthony Zhang and Diyi Yang},
  booktitle={ACL},
  year={2022}
}

Sentiment transfer is one popular example of a text style transfer task, where the goal is to reverse the sentiment polarity of a text. A sentiment reversal, however, also entails a reversal in meaning. We introduce a different but related task called positive reframing, in which we neutralize a negative point of view and generate a more positive perspective for the author without contradicting the original meaning. Our insistence on meaning preservation makes positive reframing a challenging and semantically rich task.
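
A task like this is typically cast as conditional sequence-to-sequence generation: the negative source text goes in, and a meaning-preserving positive reframe comes out. The following is a minimal sketch of that setup using Hugging Face transformers, not the authors' released code; the checkpoint name is hypothetical and stands in for a model fine-tuned on parallel (negative, reframed) pairs.

    # Positive reframing cast as conditional seq2seq generation.
    # NOTE: "your-org/bart-positive-reframing" is a hypothetical checkpoint
    # name; in practice a model would be fine-tuned on parallel
    # (negative, reframed) sentence pairs before this would work well.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    model_name = "your-org/bart-positive-reframing"  # hypothetical model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    negative = "I have a huge exam tomorrow and I haven't studied nearly enough."
    inputs = tokenizer(negative, return_tensors="pt", truncation=True)

    # Beam search favors fluent outputs; meaning preservation still has to
    # be verified separately against the source, per the task definition.
    output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=60)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Because the task forbids contradicting the original meaning, generation quality cannot be judged on sentiment alone; any evaluation would also need a semantic-similarity check between the source and the reframe.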
