Self-Supervised Knowledge Assimilation for Expert-Layman Text Style Transfer

@inproceedings{Xu2022SelfSupervisedKA,
  title={Self-Supervised Knowledge Assimilation for Expert-Layman Text Style Transfer},
  author={Wenda Xu and Michael Stephen Saxon and Misha Sra and William Yang Wang},
  booktitle={AAAI},
  year={2022}
}
Expert-layman text style transfer technologies have the potential to improve communication between members of scientific communities and the general public. High-quality information produced by experts is often filled with difficult jargon that laypeople struggle to understand. This is a particularly notable issue in the medical domain, where laymen are often confused by medical text online. At present, two bottlenecks interfere with the goal of building high-quality medical expert-layman style…
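The abstract frames the task as rewriting expert sentences into layman language. As a minimal sketch of what inference for such a system looks like when cast as sequence-to-sequence generation, the snippet below runs a generic pretrained seq2seq checkpoint from Hugging Face Transformers; the checkpoint name and the example sentence are placeholders for illustration, not the model or data from this paper.

```python
# Sketch of expert->layman rewriting framed as seq2seq generation.
# "t5-small" is a generic stand-in checkpoint (assumption), NOT the paper's
# model; substitute whichever expert-layman checkpoint you actually train.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-small"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

expert_sentence = (
    "The patient presented with acute myocardial infarction "
    "secondary to coronary artery occlusion."
)

# Encode the expert sentence and decode one candidate layman rewrite.
inputs = tokenizer(expert_sentence, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```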

Not All Errors are Equal: Learning Text Generation Metrics using Stratified Error Synthesis

This work introduces SEScore, a model-based metric that is highly correlated with human judgements without requiring human annotation, by utilizing a novel, iterative error synthesis and severity scoring pipeline.

Neuro-Symbolic Procedural Planning with Commonsense Prompting

A neuro-symbolic procedural PLANner (PLAN) is proposed that elicits procedural planning knowledge from the LLMs with commonsense-infused prompting and uses symbolic program executors on the latent procedural representations to formalize prompts from commonsense knowledge bases as a causal intervention toward the Structural Causal Model.

Neuro-Symbolic Causal Language Planning with Commonsense Prompting

A Neuro-Symbolic Causal Language Planner (CLAP) is proposed that elicits procedural knowledge from the LLMs with commonsense-infused prompting to solve the language planning problem in a zero-shot manner.

Neuro-Symbolic Procedural Planning with Commonsense Prompting

  • 2022

References

Showing 1-10 of 52 references

Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen

A new task of expertise style transfer is proposed and a manually annotated dataset is contributed, with the goal of alleviating cognitive biases and improving the accuracy and expertise level of laymen's descriptions using simple words.

Unsupervised Text Style Transfer with Padded Masked Language Models

The experiments on sentence fusion and sentiment transfer demonstrate that MASKER performs competitively in a fully unsupervised setting and improves supervised methods’ accuracy by over 10 percentage points when pre-training them on silver training data generated by MASKER.
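As a rough illustration of the padded-masking idea described in this summary, the sketch below replaces a style-bearing span with a fixed number of [MASK] tokens so that a masked LM fine-tuned on target-style text can fill in a rewrite of possibly different length. The span selection and the `fill_masks` callable are illustrative assumptions, not the MASKER implementation.

```python
from typing import Callable, List

MASK = "[MASK]"

def padded_mask(tokens: List[str], start: int, end: int, num_masks: int) -> List[str]:
    """Replace tokens[start:end] with a padded run of [MASK] tokens.

    Padding with more masks than the original span length lets the
    target-style masked LM produce a replacement of a different length.
    """
    return tokens[:start] + [MASK] * num_masks + tokens[end:]

def rewrite(tokens: List[str], start: int, end: int,
            fill_masks: Callable[[List[str]], List[str]],
            num_masks: int = 6) -> List[str]:
    """Mask a style-bearing span, then let a target-style MLM fill it in.

    `fill_masks` stands in for a masked language model fine-tuned on
    target-style text; it takes the masked token sequence and returns
    the sequence with the masks replaced.
    """
    masked = padded_mask(tokens, start, end, num_masks)
    return fill_masks(masked)

if __name__ == "__main__":
    sent = "the patient exhibited acute dyspnea".split()
    # Suppose the span [2, 5) ("exhibited acute dyspnea") was flagged as expert style.
    print(padded_mask(sent, 2, 5, num_masks=6))
```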

Deep Learning for Text Style Transfer: A Survey

A systematic survey of the research on neural text style transfer, spanning over 100 representative articles since the first neural text style transfer work in 2017, is presented.

Style Transfer as Unsupervised Machine Translation

This paper takes advantage of style-preference information and word embedding similarity to produce pseudo-parallel data with a statistical machine translation (SMT) framework and introduces a style classifier to guarantee the accuracy of style transfer and penalize bad candidates in the generated pseudo data.
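This summary mentions building pseudo-parallel data by combining style-preference information with word-embedding similarity. The toy sketch below scores candidate word substitutions by mixing embedding cosine similarity with a style-preference score; the embeddings, scores, and weighting are made-up assumptions meant only to convey the scoring idea, not the paper's SMT pipeline.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

# Toy word embeddings and style-preference scores (assumed for illustration):
# style_pref[w] > 0 means w leans toward the target (layman) style.
emb = {
    "myocardial": np.array([0.90, 0.10, 0.00]),
    "heart":      np.array([0.80, 0.20, 0.10]),
    "cardiac":    np.array([0.85, 0.15, 0.05]),
}
style_pref = {"myocardial": -0.7, "heart": 0.9, "cardiac": -0.3}

def best_substitute(source_word: str, candidates, alpha: float = 0.5) -> str:
    """Pick the candidate that stays semantically close to the source word
    (embedding cosine) while preferring words of the target style."""
    def score(c):
        return alpha * cosine(emb[source_word], emb[c]) + (1 - alpha) * style_pref[c]
    return max(candidates, key=score)

print(best_substitute("myocardial", ["heart", "cardiac"]))  # -> "heart"
```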

Cycle-Consistent Adversarial Autoencoders for Unsupervised Text Style Transfer

This paper proposes a novel neural approach to unsupervised text style transfer which it refers to as Cycle-consistent Adversarial autoEncoders (CAE) trained from non-parallel data that enhances the capacity of the adversarial style transfer networks in content preservation.
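The ingredient this summary highlights for content preservation is cycle consistency: transfer a sentence from style A to style B and back to A, then penalize the distance to the original. The PyTorch sketch below shows only that loss over placeholder transfer networks acting on continuous sentence representations; it is not the CAE architecture from the paper.

```python
import torch
import torch.nn as nn

class TinyTransfer(nn.Module):
    """Placeholder 'style transfer' network over continuous sentence
    representations; stands in for one direction of the transfer model."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

a_to_b = TinyTransfer()
b_to_a = TinyTransfer()

x_a = torch.randn(8, 16)   # batch of style-A sentence representations
x_ab = a_to_b(x_a)         # transfer A -> B
x_aba = b_to_a(x_ab)       # transfer back B -> A

# Cycle-consistency loss: going A -> B -> A should reconstruct the input,
# which pushes the transfer networks to preserve content.
cycle_loss = nn.functional.l1_loss(x_aba, x_a)
cycle_loss.backward()
print(float(cycle_loss))
```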

Multiple-Attribute Text Rewriting

This paper proposes a new model that controls several factors of variation in textual data, where the usual condition of disentangled latent representations is replaced with a simpler mechanism based on back-translation, and demonstrates that the fully entangled model produces better generations.

BioBERT: a pre-trained biomedical language representation model for biomedical text mining

This article introduces BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), a domain-specific language representation model pre-trained on large-scale biomedical corpora that largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks.

Improving Language Understanding by Generative Pre-Training

The general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, improving upon the state of the art in 9 out of the 12 tasks studied.

SciBERT: A Pretrained Language Model for Scientific Text

SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks and demonstrates statistically significant improvements over BERT.

MASS: Masked Sequence to Sequence Pre-training for Language Generation

This work proposes MAsked Sequence to Sequence pre-training (MASS) for the encoder-decoder based language generation tasks, which achieves the state-of-the-art accuracy on the unsupervised English-French translation, even beating the early attention-based supervised model.
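The masked sequence-to-sequence objective described here masks a contiguous span on the encoder side and trains the decoder to predict exactly that span. The sketch below builds one such training example from a token list; the span-length heuristic and special tokens are simplified assumptions rather than the exact MASS recipe.

```python
import random
from typing import List, Tuple

MASK = "[MASK]"

def mass_example(tokens: List[str], span_frac: float = 0.5,
                 seed: int = 0) -> Tuple[List[str], List[str], List[str]]:
    """Build one MASS-style training example.

    The encoder sees the sentence with a contiguous span replaced by [MASK]
    tokens; the decoder input is the masked-out span shifted right, and the
    target is the span itself (teacher forcing), so the model learns to
    reconstruct exactly what the encoder cannot see.
    """
    rng = random.Random(seed)
    span_len = max(1, int(len(tokens) * span_frac))
    start = rng.randrange(0, len(tokens) - span_len + 1)
    span = tokens[start:start + span_len]

    encoder_input = tokens[:start] + [MASK] * span_len + tokens[start + span_len:]
    decoder_input = ["<s>"] + span[:-1]   # shifted-right span
    decoder_target = span
    return encoder_input, decoder_input, decoder_target

enc, dec_in, dec_out = mass_example(
    "patients with hypertension require regular monitoring".split())
print(enc, dec_in, dec_out, sep="\n")
```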
...