Publications
Social Bias Frames: Reasoning about Social and Power Implications of Language
TLDR
It is found that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias, they are not effective at spelling out more detailed explanations in terms of Social Bias Frames.
The Multilingual Amazon Reviews Corpus
TLDR
This work proposes using mean absolute error (MAE) instead of classification accuracy for this task, since MAE accounts for the ordinal nature of the ratings.
Citation Text Generation
TLDR
This paper establishes the task of citation text generation with a standard evaluation corpus, develops several strong baseline models, and provides extensive automatic and human evaluations to illustrate the successes and shortcomings of current text generation techniques.
Computational Text Analysis for Social Science: Model Assumptions and Complexity
TLDR
The spectrum of current methods is surveyed along two dimensions, computational and statistical model complexity and domain assumptions, to suggest research directions that better align new methods with the goals of social scientists.
Etch-a-Sketching: Evaluating the Post-Primary Rhetorical Moderation Hypothesis
Candidates have incentives to present themselves as strong partisans in primary elections, and then move “toward the center” upon advancing to the general election. Yet, candidates also face
Contextual word representations
TLDR
Advances in how programs represent natural language words have had a major impact on AI, and this work highlights the need to understand these representations in more detail.
Choose Your Own Adventure: Paired Suggestions in Collaborative Writing for Evaluating Story Generation Models
TLDR
This work presents Choose Your Own Adventure, a collaborative writing setup for pairwise model evaluation, in which two models offer suggestions to people as they write a short story; writers are asked to choose one of the two suggestions, revealing which model's suggestions they prefer.
On Consequentialism and Fairness
TLDR
This paper provides a consequentialist critique of common definitions of fairness within machine learning, as well as a machine learning perspective on consequentialism, which brings to the fore some of the tradeoffs involved.