Corpus ID: 221090443

Navigating Language Models with Synthetic Agents

@article{Feldman2020NavigatingLM,
  title={Navigating Language Models with Synthetic Agents},
  author={Philip G. Feldman},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.04162}
}
Modern natural language models such as GPT-2 and GPT-3 contain tremendous amounts of information about human belief in a consistently interrogatable form. If these models could be shown to accurately reflect the underlying beliefs of the human beings that produced the data used to train them, then such models would become a powerful sociological tool in ways that are distinct from traditional methods such as interviews and surveys. In this study, we train a version of the GPT-2 on a corpus…
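The key move is treating a trained language model as something that can be queried repeatedly and statistically. A minimal sketch of that kind of interrogation, assuming the HuggingFace transformers library and the stock "gpt2" checkpoint in place of the paper's fine-tuned model (the probe prompt is hypothetical, not from the paper):

# Sketch only: prompt a GPT-2 checkpoint and sample several continuations,
# treating the distribution of outputs as the object of study.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Most people believe that"  # hypothetical probe, not from the paper
inputs = tokenizer(prompt, return_tensors="pt")

# Repeated sampling is what makes the model "consistently interrogatable":
# the same probe can be issued many times and the continuations aggregated.
outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.9,
    max_length=40,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))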

Citations

Analyzing COVID-19 Tweets with Transformer-based Language Models
TLDR: The results on the COVID-19 tweet data show that transformer language models are promising tools that can help us understand public opinions on social media at scale.

References

Showing 1-10 of 24 references
BERTology Meets Biology: Interpreting Attention in Protein Language Models
TLDR: The inner workings of the Transformer are analyzed, showing that attention captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence but spatially close in the three-dimensional structure.
A snapshot of the frontiers of fairness in machine learning
A group of industry, academic, and government experts convene in Philadelphia to explore the roots of algorithmic bias.
Animals in Virtual Environments
TLDR: This review provides an overview of animal behavior experiments conducted in virtual environments and indicates that VEs for animals are becoming a widely used application of XR technology, although such applications have not previously been reported in the technical literature on XR.
Language Models are Few-Shot Learners
TLDR: GPT-3 achieves strong performance on many NLP datasets, including translation, question answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic; a brief sketch of the few-shot prompting format follows.
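To make the "few-shot" setting concrete, this sketch builds a prompt in which task demonstrations are written directly into the context and the model is asked to continue, with no gradient updates; the translation pairs and query are illustrative only.

# Sketch of few-shot prompting: demonstrations in the prompt, no fine-tuning.
demonstrations = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]
query = "mint"

prompt = "Translate English to French.\n"
for english, french in demonstrations:
    prompt += f"{english} => {french}\n"
prompt += f"{query} =>"

print(prompt)  # the language model is expected to continue with the translation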
Lessons from archives: strategies for collecting sociocultural data in machine learning
TLDR: It is argued that a new specialization should be formed within ML, focused on methodologies for data collection and annotation: efforts that require institutional frameworks and procedures for sociocultural data collection.
The Curious Case of Neural Text Degeneration
TLDR: By sampling text from the dynamic nucleus of the probability distribution, which allows for diversity while effectively truncating the less reliable tail of the distribution, the resulting text better matches the quality of human text, yielding enhanced diversity without sacrificing fluency and coherence; a brief sketch of nucleus sampling follows.
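This sketch implements the top-p (nucleus) filtering step over raw logits in plain PyTorch; it is an illustration, not the paper's code, and the toy logits are made up.

import torch

def nucleus_sample(logits: torch.Tensor, p: float = 0.9) -> int:
    # Sort token probabilities in descending order and accumulate mass.
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)

    # The nucleus is the smallest prefix whose cumulative mass reaches p.
    cutoff = int((cumulative < p).sum().item()) + 1
    nucleus = sorted_probs[:cutoff]
    nucleus = nucleus / nucleus.sum()  # renormalize within the nucleus

    choice = torch.multinomial(nucleus, num_samples=1)
    return int(sorted_ids[choice].item())

# Toy example with a five-token vocabulary.
token_id = nucleus_sample(torch.tensor([2.0, 1.5, 0.3, -1.0, -2.0]), p=0.9)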
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TLDR: A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks; a minimal fine-tuning sketch follows.
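A small sketch of the "one additional output layer" idea using the HuggingFace transformers wrappers; the texts and labels are made up and this is not the paper's own code.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Pretrained bidirectional encoder plus a freshly initialized classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

# One forward pass yields both the fine-tuning loss and the logits.
outputs = model(**batch, labels=labels)
print(outputs.loss.item(), outputs.logits.shape)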
Dynamic Knowledge Graph Construction for Zero-shot Commonsense Question Answering
TLDR: Empirical results on the SocialIQa and StoryCommonsense datasets in a zero-shot setting demonstrate that using commonsense knowledge models to dynamically construct and reason over knowledge graphs achieves performance boosts over pre-trained language models and over using knowledge models to directly evaluate answers.
HuggingFace's Transformers: State-of-the-art Natural Language Processing
TLDR: The Transformers library is an open-source library that consists of carefully engineered state-of-the-art Transformer architectures under a unified API, together with a curated collection of pretrained models made by and available for the community; a small usage sketch follows.
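A brief usage sketch of that unified API via the pipeline entry point; the hub checkpoint name and the masked sentence are illustrative choices, not from the paper.

from transformers import pipeline

# One entry point loads a pretrained model plus its tokenizer for a named task.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill_mask("Language models capture human [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))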
Language Models are Unsupervised Multitask Learners
TLDR: It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.