Human Heuristics for AI-Generated Language Are Flawed

@article{Jakesch2022HumanHF,
  title={Human Heuristics for AI-Generated Language Are Flawed},
  author={Maurice Jakesch and Jeffrey T. Hancock and Mor Naaman},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.07271}
}
Human communication is increasingly intermixed with language generated by AI. Across chat, email, and social media, AI systems produce smart replies, autocompletes, and translations. AI-generated language is often not identified as such but poses as human language, raising concerns about novel forms of deception and manipulation. Here, we study how humans discern whether one of the most personal and consequential forms of language – a self-presentation – was generated by AI. In six experiments… 


References

Showing 1-10 of 32 references
Is GPT-3 Text Indistinguishable from Human Text? Scarecrow: A Framework for Scrutinizing Machine Text
TLDR: This work proposes a new framework called Scarecrow for scrutinizing machine text via crowd annotation, and quantifies measurable gaps between human-authored text and generations from models of several sizes, including fourteen configurations of GPT-3.
Unifying Human and Statistical Evaluation for Natural Language Generation
TLDR: This paper proposes a unified framework, HUSE, which evaluates both diversity and quality based on the optimal error rate of predicting whether a sentence is human- or machine-generated; HUSE is efficiently estimated by combining human and statistical evaluation.
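The optimal-error-rate idea behind HUSE can be illustrated with a small sketch: twice the leave-one-out error of a simple classifier that tries to separate human-written from model-generated sentences using two per-sentence features (a human typicality judgment and a length-normalized model log-probability). The feature values and the k-NN choice below are toy placeholders, not the paper's exact estimator.

```python
# Minimal sketch of a HUSE-style score: twice the leave-one-out error of a
# k-NN classifier distinguishing human from machine sentences using two
# features per sentence. Feature values here are toy random placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
human_feats = rng.normal(loc=[0.8, -3.0], scale=0.3, size=(50, 2))    # human-written
machine_feats = rng.normal(loc=[0.6, -2.5], scale=0.3, size=(50, 2))  # model-generated

X = np.vstack([human_feats, machine_feats])
y = np.array([1] * 50 + [0] * 50)  # 1 = human, 0 = machine

# Leave-one-out k-NN approximates the optimal discriminator's error rate.
pred = cross_val_predict(KNeighborsClassifier(n_neighbors=5), X, y, cv=LeaveOneOut())
error = np.mean(pred != y)
huse = 2 * error
print(f"HUSE-style score: {huse:.2f}")
```

A score near 1.0 means the classifier does no better than chance, i.e. the generated text is statistically indistinguishable from human text on these features.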
Neural Language Models are Effective Plagiarists
TLDR: It is found that a student using GPT-J can complete introductory-level programming assignments without triggering suspicion from MOSS, a widely used plagiarism detection tool.
All the News that’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation
Abstract: Online misinformation has become a constant; only the way actors create and distribute that information is changing. Advances in artificial intelligence (AI) such as GPT-2 mean that actors…
Lying Words: Predicting Deception from Linguistic Styles
TLDR: The current project investigated the features of linguistic style that distinguish between true and false stories, and found that liars showed lower cognitive complexity, used fewer self-references and other-references, and used more negative emotion words than truth-tellers.
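As a rough sketch of the kind of linguistic-style features this line of work relies on, the snippet below computes per-100-word rates of self-references and negative-emotion words; the word lists are small hypothetical stand-ins for the LIWC-style lexicons actually used in the study.

```python
# Toy illustration of linguistic-style features used in deception research:
# rates of self-references and negative-emotion words per 100 words.
# The word lists are hypothetical stand-ins for LIWC-style lexicons.
import re

SELF_REFERENCES = {"i", "me", "my", "mine", "myself"}
NEGATIVE_EMOTION = {"hate", "worthless", "sad", "angry", "afraid", "terrible"}

def style_features(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "self_ref_per_100": 100 * sum(w in SELF_REFERENCES for w in words) / n,
        "neg_emotion_per_100": 100 * sum(w in NEGATIVE_EMOTION for w in words) / n,
    }

print(style_features("I felt terrible about it, but I am telling the truth."))
```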
Language Models are Unsupervised Multitask Learners
TLDR: It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations
TLDR: A research agenda around AI-MC should consider the design of these technologies and the psychological, linguistic, relational, policy, and ethical implications of introducing AI into human–human communication.
GLTR: Statistical Detection and Visualization of Generated Text
TLDR: This work introduces GLTR, a tool to support humans in detecting whether a text was generated by a model, and shows that the annotation scheme provided by GLTR improves the human detection rate of fake text from 54% to 72% without any prior training.
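The statistic GLTR visualizes can be approximated with a short script: the rank of each observed token in a language model's predicted next-token distribution, since machine-generated text tends to be dominated by low-rank (highly probable) tokens. The sketch below uses GPT-2 via the Hugging Face transformers library as a stand-in detector model; it illustrates the idea rather than GLTR's exact implementation.

```python
# Per-token rank statistic of the kind GLTR visualizes, computed with GPT-2.
# Requires `pip install transformers torch`.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_ranks(text: str):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # (1, seq_len, vocab)
    ranks = []
    for pos in range(1, ids.shape[1]):
        # The distribution for position `pos` comes from the previous position.
        order = torch.argsort(logits[0, pos - 1], descending=True)
        rank = int((order == ids[0, pos]).nonzero().item()) + 1
        ranks.append((tokenizer.decode(int(ids[0, pos])), rank))
    return ranks

print(token_ranks("I am a fourth-year student studying computer science."))
```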
Language Models are Few-Shot Learners
TLDR: GPT-3 achieves strong performance on many NLP datasets, including translation, question answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.
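The few-shot setup described here amounts to in-context learning: task demonstrations are placed directly in the prompt and the model completes the final, unanswered example. The sketch below builds such a prompt for the word-unscrambling task mentioned above; the demonstrations are made up for illustration, and no model or API is actually called.

```python
# Sketch of a few-shot (in-context learning) prompt for word unscrambling.
# The resulting string would be sent to a text-completion model such as GPT-3.
demonstrations = [
    ("tsudnet", "student"),
    ("ecnicse", "science"),
    ("puremoct", "computer"),
]
query = "aesrch"

prompt = "Unscramble the letters to form an English word.\n\n"
for scrambled, answer in demonstrations:
    prompt += f"Scrambled: {scrambled}\nWord: {answer}\n\n"
prompt += f"Scrambled: {query}\nWord:"

print(prompt)
```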
Attention is All you Need
TLDR: A new simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, as demonstrated by applying it successfully to English constituency parsing with both large and limited training data.
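At the core of the Transformer is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V; a minimal single-head numpy sketch follows.

```python
# Scaled dot-product attention for a single head: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_queries, n_keys)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (n_queries, d_v)

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```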