PIQA: Reasoning about Physical Commonsense in Natural Language

@inproceedings{Bisk2020PIQARA,
  title={PIQA: Reasoning about Physical Commonsense in Natural Language},
  author={Yonatan Bisk and Rowan Zellers and Ronan Le Bras and Jianfeng Gao and Yejin Choi},
  booktitle={AAAI},
  year={2020}
}
To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems. While recent pretrained models (such as BERT) have made progress on question answering over more abstract domains – such as news articles and encyclopedia entries, where text is plentiful – in more physical domains, text is inherently limited due to reporting bias. Can AI systems learn to reliably…
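The benchmark is a two-alternative choice task: each example pairs a physical goal with two candidate solutions, and a system must pick the more plausible one. Below is a minimal sketch of that structure and its accuracy metric; the field names (goal, sol1, sol2, label) and the trivial word-overlap scorer are illustrative assumptions, not the authors' data schema or model.

from dataclasses import dataclass

@dataclass
class PiqaExample:
    goal: str   # the physical goal, e.g. "apply eyeshadow without a brush"
    sol1: str   # first candidate solution
    sol2: str   # second candidate solution
    label: int  # 0 if sol1 is the intended answer, 1 if sol2 is

def plausibility(goal: str, solution: str) -> float:
    # Placeholder scorer: counts words shared between goal and solution.
    # In practice a pretrained language model would score each pair.
    return float(len(set(goal.split()) & set(solution.split())))

def accuracy(examples: list[PiqaExample]) -> float:
    # Pick the higher-scoring solution per example and compare to the label.
    correct = 0
    for ex in examples:
        pred = 0 if plausibility(ex.goal, ex.sol1) >= plausibility(ex.goal, ex.sol2) else 1
        correct += int(pred == ex.label)
    return correct / len(examples)

if __name__ == "__main__":
    demo = [PiqaExample(
        goal="apply eyeshadow without a brush",
        sol1="use a cotton swab to dab the eyeshadow on",
        sol2="use a toothpick to dab the eyeshadow on",
        label=0,
    )]
    print(f"accuracy: {accuracy(demo):.2f}")

Because the two solutions typically differ by only a word or phrase, a surface-overlap scorer like the one above cannot separate them; that gap between lexical similarity and physical plausibility is what the benchmark is designed to probe.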