Publications
Universal Adversarial Triggers for Attacking and Analyzing NLP
Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset.
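As a rough illustration of what a trigger does (the paper's gradient-guided search for trigger tokens is omitted), here is a minimal sketch that prepends a fixed token sequence to every input of a sentiment classifier and measures how often the prediction lands on a target label. The Hugging Face pipeline, its label names, and the example trigger string are placeholders rather than the paper's exact setup.

```python
from transformers import pipeline

# Placeholder classifier; the paper attacks task-specific models, and the
# default pipeline model/labels ("POSITIVE"/"NEGATIVE") are only an example.
clf = pipeline("sentiment-analysis")

# Example trigger token sequence; in the paper such triggers are found by a
# gradient-guided (HotFlip-style) search, which is not shown here.
trigger = "zoning tapping fiennes"

inputs = [
    "the acting is superb and the story is genuinely moving .",
    "a heartfelt and beautifully shot film .",
]

def attack_success_rate(texts, trigger, target_label="NEGATIVE"):
    """Fraction of inputs pushed to the target label once the trigger is prepended."""
    hits = 0
    for text in texts:
        pred = clf((trigger + " " + text).strip())[0]["label"]
        hits += int(pred == target_label)
    return hits / len(texts)

print("success rate with trigger:   ", attack_success_rate(inputs, trigger))
print("success rate without trigger:", attack_success_rate(inputs, ""))
```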
Compositional Questions Do Not Necessitate Multi-hop Reasoning
TLDR
This work introduces a single-hop BERT-based RC model that achieves 67 F1, comparable to state-of-the-art multi-hop models, and designs an evaluation setting where humans are not shown all of the necessary paragraphs for the intended multi-hop reasoning but can still answer over 80% of questions.
Pretrained Transformers Improve Out-of-Distribution Robustness
TLDR
This work systematically measures out-of-distribution (OOD) generalization for seven NLP datasets by constructing a new robustness benchmark with realistic distribution shifts, measures the generalization of previous models, and finds that larger models are not necessarily more robust, that distillation can be harmful, and that more diverse pretraining data can enhance robustness.
Pathologies of Neural Models Make Interpretations Difficult
TLDR
This work uses input reduction, which iteratively removes the least important word from the input, to expose pathological behaviors of neural models: the remaining words appear nonsensical to humans and are not the ones determined as important by interpretation methods.
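A minimal sketch of that reduction loop, assuming a hypothetical `predict_proba(words)` that returns the model's confidence in its original prediction for a (possibly reduced) input; importance is approximated here by a leave-one-out confidence drop, whereas the paper uses gradient-based importance.

```python
def input_reduction(words, predict_proba, confidence_floor=0.5):
    """Iteratively delete the least important word while the model's confidence
    in its original prediction stays above a floor.

    `predict_proba` is a hypothetical callable (model-specific); the stopping
    criterion and importance measure are simplified relative to the paper.
    """
    reduced = list(words)
    while len(reduced) > 1:
        base = predict_proba(reduced)
        # Importance of word i = drop in confidence when word i is removed.
        drops = [
            base - predict_proba(reduced[:i] + reduced[i + 1:])
            for i in range(len(reduced))
        ]
        least = min(range(len(reduced)), key=drops.__getitem__)
        if base - drops[least] < confidence_floor:
            break  # removing any more words would lose the prediction
        reduced = reduced[:least] + reduced[least + 1:]
    return reduced  # often nonsensical to humans, yet still high-confidence
```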
Do NLP Models Know Numbers? Probing Numeracy in Embeddings
TLDR
This work investigates the numerical reasoning capabilities of a state-of-the-art question answering model on the DROP dataset and finds this model excels on questions that require numerical reasoning, i.e., it already captures numeracy.
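A minimal sketch of the probing idea behind this line of analysis: fit a small regression probe that tries to decode a number's value from its token embedding, evaluated on an extrapolation split. The `embed` stub is a placeholder for real pre-trained embeddings, so the printed error is meaningless as written.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

# Placeholder embedding lookup; a real probe would use pre-trained vectors
# (e.g., word or contextual embeddings) instead of this random stub.
def embed(number_token: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(number_token)) % (2 ** 32))
    return rng.normal(size=300)

train_numbers = list(range(0, 100))
test_numbers = list(range(100, 120))  # extrapolation: unseen, larger values

X_train = np.stack([embed(str(n)) for n in train_numbers])
X_test = np.stack([embed(str(n)) for n in test_numbers])

# Linear probe: can the number's value be read off its embedding?
probe = Ridge(alpha=1.0).fit(X_train, train_numbers)
print("MAE on held-out numbers:",
      mean_absolute_error(test_numbers, probe.predict(X_test)))
```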
Eliciting Knowledge from Language Models Using Automatically Generated Prompts
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blanks problems (e.g., cloze tests) is a natural approach for gauging such knowledge.
Calibrate Before Use: Improving Few-Shot Performance of Language Models
TLDR
This work first estimates the model's bias towards each answer by asking for its prediction when given the training prompt and a content-free test input such as "N/A", and then fits calibration parameters that cause the prediction for this input to be uniform across answers.
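A minimal sketch of that calibration step, assuming a hypothetical `label_probs(prompt, x)` that returns the language model's probabilities over the answer labels; the diagonal rescaling follows the description above (make the content-free input's prediction uniform).

```python
import numpy as np

def calibrated_probs(label_probs, prompt, text, content_free="N/A"):
    """Calibration sketch: estimate the bias from a content-free input, then
    rescale so that input would receive a uniform prediction.

    `label_probs(prompt, x)` is a hypothetical callable returning a vector of
    label probabilities from the language model.
    """
    # Bias estimate: probabilities the model assigns to each answer for "N/A".
    p_cf = np.asarray(label_probs(prompt, content_free), dtype=float)
    p_cf = p_cf / p_cf.sum()

    # Diagonal calibration W = diag(1 / p_cf), b = 0: applying W to p_cf and
    # renormalizing yields the uniform distribution.
    W = np.diag(1.0 / p_cf)

    p = np.asarray(label_probs(prompt, text), dtype=float)
    q = W @ p
    return q / q.sum()
```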
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
TLDR
This work introduces AllenNLP Interpret, a flexible framework for interpreting NLP models, which provides interpretation primitives for any AllenNLP model and task, a suite of built-in interpretation methods, and a library of front-end visualization components.
Extracting Training Data from Large Language Models
TLDR
This paper demonstrates that an adversary with query access to a language model can perform a training data extraction attack to recover individual training examples, and finds that larger models are more vulnerable than smaller models.
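A minimal sketch of the query-and-rank idea (not the paper's full attack): sample unconditioned generations from a released model and rank them by perplexity, the basic membership signal. The model choice, sample count, and decoding settings here are arbitrary placeholders.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Placeholder model; the attack only needs sampling plus a likelihood score
# from the language model being queried.
tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (lower = more likely)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# 1) Query the model: draw unconditioned samples (a real attack would use far
#    more samples and several sampling strategies).
samples = [
    tok.decode(
        model.generate(max_length=64, do_sample=True, top_k=40,
                       pad_token_id=tok.eos_token_id)[0],
        skip_special_tokens=True,
    )
    for _ in range(5)
]

# 2) Rank samples: unusually low perplexity flags candidate memorized text.
for ppl, text in sorted((perplexity(s), s) for s in samples):
    print(round(ppl, 1), repr(text[:60]))
```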
Evaluating Models’ Local Decision Boundaries via Contrast Sets
TLDR
This work proposes a more rigorous annotation paradigm for NLP that helps to close systematic gaps in the test data, recommending that dataset authors manually perturb test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets.