Corpus ID: 233210218

Fool Me Twice: Entailment from Wikipedia Gamification

@inproceedings{Eisenschlos2021FoolMT,
  title={Fool Me Twice: Entailment from Wikipedia Gamification},
  author={Julian Martin Eisenschlos and Bhuwan Dhingra and Jannis Bulian and Benjamin Borschinger and Jordan L. Boyd-Graber},
  booktitle={NAACL},
  year={2021}
}
We release FoolMeTwice (FM2 for short), a large dataset of challenging entailment pairs collected through a fun multi-player game. Gamification encourages adversarial examples, drastically lowering the number of examples that can be solved using “shortcuts” compared to other popular entailment datasets. Players are presented with two tasks. The first task asks the player to write a plausible claim based on the evidence from a Wikipedia page. The second one shows two plausible claims written by…
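As a rough illustration of the kind of data the abstract describes, the sketch below shows one possible in-memory representation of a single entailment pair, plus a toy claim-only "shortcut" heuristic of the sort the gamified, adversarial collection is meant to defeat. The field names (claim, evidence, wiki_page, label) and the example record are assumptions for illustration only, not the released dataset's documented schema.

```python
# A minimal sketch of an FM2-style entailment pair.
# Field names are illustrative assumptions, not the dataset's actual schema.
from dataclasses import dataclass
from typing import List


@dataclass
class EntailmentPair:
    claim: str            # player-written claim
    evidence: List[str]   # sentences from the source Wikipedia page
    wiki_page: str        # title of the Wikipedia page the claim is based on
    label: str            # "SUPPORTS" or "REFUTES"


example = EntailmentPair(
    claim="The Eiffel Tower was completed in 1889.",
    evidence=[
        "The tower was completed in 1889 as the entrance arch to the "
        "1889 World's Fair."
    ],
    wiki_page="Eiffel Tower",
    label="SUPPORTS",
)


def claim_only_guess(pair: EntailmentPair) -> str:
    """A trivial baseline that ignores the evidence entirely; adversarially
    written claims are intended to make such shortcuts unreliable."""
    return "REFUTES" if " not " in f" {pair.claim.lower()} " else "SUPPORTS"


print(claim_only_guess(example))
```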
