Rethinking Text Attribute Transfer: A Lexical Analysis

@article{Fu2019RethinkingTA,
  title={Rethinking Text Attribute Transfer: A Lexical Analysis},
  author={Yao Fu and Hao Zhou and Jiaze Chen and Lei Li},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.12335}
}
  • Yao Fu, Hao Zhou, Jiaze Chen, Lei Li
  • Published 1 September 2019
  • Computer Science
  • ArXiv
Text attribute transfer is the task of modifying certain linguistic attributes (e.g. sentiment, style, authorship) of a sentence, transforming them from one type to another. In this paper, we aim to analyze and interpret what is changed during the transfer process. We start from the observation that in many existing models and datasets, certain words within a sentence play important roles in determining the sentence attribute class. These words are referred to as the Pivot Words. Based on these… 
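The pivot-word idea can be illustrated with a minimal sketch. This is not the authors' exact method; it is a hypothetical log-odds scoring over toy data, where words strongly associated with one attribute class score high in magnitude:

```python
import math
from collections import Counter

def pivot_scores(pos_sents, neg_sents, smoothing=1.0):
    """Score each word by smoothed log-odds of appearing in the
    positive-attribute corpus vs. the negative one. Large-magnitude
    scores flag candidate pivot words; near-zero scores suggest
    attribute-neutral content words."""
    pos = Counter(w for s in pos_sents for w in s.split())
    neg = Counter(w for s in neg_sents for w in s.split())
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    scores = {}
    for w in vocab:
        p = (pos[w] + smoothing) / (n_pos + smoothing * len(vocab))
        q = (neg[w] + smoothing) / (n_neg + smoothing * len(vocab))
        scores[w] = math.log(p / q)
    return scores

scores = pivot_scores(
    ["the food was delicious", "delicious and fresh"],
    ["the food was awful", "awful and stale"],
)
# "delicious" scores strongly positive, "awful" strongly negative,
# and shared words like "the" score near zero.
```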
Contextualizing Variation in Text Style Transfer Datasets
TLDR
This paper conducts several empirical analyses of existing text style datasets and proposes a categorization of stylistic and dataset properties to consider when utilizing or comparing text style datasets.
From Theories on Styles to their Transfer in Text: Bridging the Gap with a Hierarchical Survey
TLDR
A comprehensive discussion of the styles that have received attention in the transfer task is provided, organized into a hierarchy, highlighting the challenges for the definition of each of them, and pointing out gaps in the current research landscape.
Exploiting pivot words to classify and summarize discourse facets of scientific papers
TLDR
A new, more effective solution to the CL-SciSumm discourse facet classification task, which entails identifying, for each cited text span, which facet of the paper it belongs to from a predefined set of facets; the approach is also used to extract facet-specific descriptions of each reference paper (RP), each consisting of a fixed-length collection of the RP's text spans.
VAE based Text Style Transfer with Pivot Words Enhancement Learning
TLDR
A novel VAE based Text Style Transfer with pivOt Words Enhancement leaRning (VT-STOWER) method which utilizes Variational AutoEncoder (VAE) and external style embeddings to learn semantics and style distribution jointly.
Stylistic Retrieval-based Dialogue System with Unparallel Training Data
TLDR
This paper proposes a flexible framework that adapts a generic retrieval-based dialogue system to mimic the language style of a specified persona without any parallel data, and demonstrates the feasibility of building stylistic dialogue systems by simple data augmentation.
Controllable Story Generation with External Knowledge Using Large-Scale Language Models
TLDR
MEGATRON-CNTRL is a novel framework that uses large-scale language models and adds control to text generation by incorporating an external knowledge base and showcases the controllability of the model by replacing the keywords used to generate stories and re-running the generation process.
Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen
TLDR
A new task of expertise style transfer is proposed and a manually annotated dataset is contributed with the goal of alleviating cognitive biases and improving the accuracy and expertise level of laymen descriptions using simple words.
Latent Template Induction with Gumbel-CRFs
TLDR
This work proposes a Gumbel-CRF, a continuous relaxation of the CRF sampling algorithm using a relaxed Forward-Filtering Backward-Sampling (FFBS) approach, which gives more stable gradients than score-function based estimators and shows that it learns interpretable templates during training, which allows us to control the decoder during testing.
Listener’s Social Identity Matters in Personalised Response Generation
TLDR
It is demonstrated that the listener’s identity indeed matters in the language use of responses and that the response generator can capture such differences in language use.
Language Generation via Combinatorial Constraint Satisfaction: A Tree Search Enhanced Monte-Carlo Approach
TLDR
This work proposes TSMC, an efficient method to generate high-likelihood sentences with respect to a pre-trained language model while satisfying the constraints, which is highly flexible, requires no task-specific training, and leverages efficient constraint satisfaction solving techniques.

References

(showing 10 of 32 references)
Delete, Retrieve, Generate: a Simple Approach to Sentiment and Style Transfer
TLDR
This paper proposes simpler methods motivated by the observation that text attributes are often marked by distinctive phrases; the strongest method extracts content words by deleting phrases associated with the sentence's original attribute value, retrieves new phrases associated with the target attribute, and uses a neural model to fluently combine these into a final output.
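The "delete" step of this pipeline can be sketched in a few lines. The scores and threshold below are hypothetical placeholders, not the paper's trained salience model:

```python
# Hypothetical attribute-association scores (e.g. log-odds estimated
# from a labelled corpus); the paper derives these from data.
attribute_scores = {"delicious": 2.3, "awful": -2.1, "food": 0.1}

def delete_attribute_phrases(sentence, scores, threshold=1.0):
    """'Delete' step: drop words strongly associated with the source
    attribute, keeping the attribute-neutral content behind."""
    return " ".join(
        w for w in sentence.split()
        if abs(scores.get(w, 0.0)) < threshold
    )

content = delete_attribute_phrases("the food was delicious", attribute_scores)
# content == "the food was"
```

The retained content would then be paired with retrieved target-attribute phrases and passed to a generator to produce the final transferred sentence.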
Multiple-Attribute Text Style Transfer
TLDR
It is shown that the disentanglement condition is not necessary and is not always met in practice, even with domain adversarial training that explicitly aims at learning disentangled representations, and a new model is proposed where this condition on disentanglement is replaced with a simpler mechanism based on back-translation.
Fightin' Words: Lexical Feature Selection and Evaluation for Identifying the Content of Political Conflict
TLDR
A variety of techniques for selecting words that capture partisan, or other, differences in political speech and for evaluating the relative importance of those words are discussed and several new approaches based on Bayesian shrinkage and regularization are introduced.
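One of the core techniques from this line of work, the log-odds ratio with a Dirichlet prior and z-scoring, can be sketched as follows. This is a simplified version with a uniform (uninformative) prior over the joint vocabulary, an assumption on my part rather than the paper's exact configuration:

```python
import math
from collections import Counter

def fightin_words_z(corpus_a, corpus_b, prior=0.01):
    """Z-scored log-odds ratio with a uniform Dirichlet prior.
    Positive z: word is over-represented in corpus_a; negative z:
    over-represented in corpus_b."""
    ca = Counter(w for s in corpus_a for w in s.split())
    cb = Counter(w for s in corpus_b for w in s.split())
    vocab = set(ca) | set(cb)
    na, nb = sum(ca.values()), sum(cb.values())
    a0 = prior * len(vocab)  # total prior pseudo-count
    z = {}
    for w in vocab:
        ya, yb = ca[w], cb[w]
        # Prior-smoothed log-odds difference between the two corpora.
        delta = (math.log((ya + prior) / (na + a0 - ya - prior))
                 - math.log((yb + prior) / (nb + a0 - yb - prior)))
        # Approximate variance of the log-odds difference.
        var = 1.0 / (ya + prior) + 1.0 / (yb + prior)
        z[w] = delta / math.sqrt(var)
    return z

z = fightin_words_z(
    ["great great great food"],
    ["bad bad bad food"],
)
# "great" gets a positive z-score, "bad" a negative one, and the
# shared word "food" scores zero.
```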
Style Transfer Through Back-Translation
TLDR
A latent representation of the input sentence is learned which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties, and adversarial generation techniques are used to make the output match the desired style.
Style Transfer from Non-Parallel Text by Cross-Alignment
TLDR
This paper proposes a method that leverages refined alignment of latent representations to perform style transfer on the basis of non-parallel text, and demonstrates the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order.
Reinforcement Learning Based Text Style Transfer without Parallel Training Corpus
TLDR
Experimental results on two different style transfer tasks, sentiment transfer and formality transfer, show that the proposed reinforcement-learning-based generator-evaluator architecture outperforms state-of-the-art approaches.
QuaSE: Sequence Editing under Quantifiable Guidance
TLDR
This framework explores the pseudo-parallel sentences by modeling their content similarity and outcome differences to enable a better disentanglement of the latent factors, which allows generating an output to better satisfy the desired outcome and keep the content.
Style Transfer in Text: Exploration and Evaluation
TLDR
Two models are explored for style transfer with non-parallel data, learning separate content representations and style representations using adversarial networks, and novel evaluation metrics are proposed that measure two aspects of style transfer: transfer strength and content preservation.
Unsupervised Text Style Transfer using Language Models as Discriminators
TLDR
This paper proposes a new technique that uses a target domain language model as the discriminator, providing richer and more stable token-level feedback during the learning process, and shows that this approach leads to improved performance on three tasks: word substitution decipherment, sentiment modification, and related language translation.
Controlling Linguistic Style Aspects in Neural Language Generation
TLDR
The method is based on conditioned RNN language model, where the desired content as well as the stylistic parameters serve as conditioning contexts and is successful in generating coherent sentences corresponding to the required linguistic style and content.