HellaSwag: Can a Machine Really Finish Your Sentence?

@inproceedings{Zellers2019HellaSwagCA,
  title={HellaSwag: Can a Machine Really Finish Your Sentence?},
  author={Rowan Zellers and Ari Holtzman and Yonatan Bisk and Ali Farhadi and Yejin Choi},
  booktitle={ACL},
  year={2019}
}
Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most likely follow-up: "She sets her fingers on the keys." With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human-level commonsense inference? In this paper, we show that commonsense inference still proves difficult for even state-of-the-art…
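To make the task format concrete, the sketch below (not from the paper) ranks candidate endings by the total log-probability a pretrained language model assigns to each one. The choice of GPT-2 via the HuggingFace transformers library and the second candidate ending are illustrative assumptions; the paper itself evaluates a range of models, including BERT.

```python
# Minimal sketch: pick the ending a language model finds most likely.
# Assumes the HuggingFace `transformers` library; GPT-2 stands in for any LM.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def ending_logprob(context: str, ending: str) -> float:
    """Sum of log-probabilities the model assigns to the ending's tokens."""
    ctx_ids = tokenizer.encode(context)
    end_ids = tokenizer.encode(" " + ending)  # leading space for BPE
    input_ids = torch.tensor([ctx_ids + end_ids])
    with torch.no_grad():
        logits = model(input_ids).logits
    # Row t of logits predicts token t+1, so drop the final row.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    total = 0.0
    for i, tok in enumerate(end_ids):
        # Ending token i sits at input position len(ctx_ids) + i and is
        # predicted by the row just before it.
        total += log_probs[len(ctx_ids) - 1 + i, tok].item()
    return total

context = "A woman sits at a piano."
endings = ["She sets her fingers on the keys.",
           "She eats the piano."]  # hypothetical distractor, for illustration
print(max(endings, key=lambda e: ending_logprob(context, e)))
```

Scoring by conditional log-likelihood is one common way to apply an autoregressive LM to multiple-choice completion; a discriminative model like BERT would instead encode each (context, ending) pair and classify it.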

Citations

Publications citing this paper.

Counterfactual Story Reasoning and Generation

Evaluating Commonsense in Pre-trained Language Models

Adversarial Filters of Dataset Biases

A Critical Look at Benchmarking Datasets: Problem of finding relationship between sentences

Adversarial NLI: A New Benchmark for Natural Language Understanding

References

Publications referenced by this paper.

Deep contextualized word representations

Improving Language Understanding by Generative Pre-Training

Movie Description

Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books

Notes

BERT setup: We extensively study BERT in this paper and make no changes to the underlying architecture or pretraining (2019).

Adversarial filtering: Similarly to Zellers et al. (2018), we train the AF models in a multi-way fashion; since we use BERT-Large as the discriminator, this matches Devlin et al. (2018)'s model for SWAG.
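As background on the adversarial filtering (AF) loop the note above refers to, here is a hedged sketch of the general idea: repeatedly retrain a discriminator on a random split of the data, then replace the wrong endings it finds easy with fresh machine-generated candidates, so that only hard negatives survive. The helper functions (generate_candidates, train_discriminator, score) are hypothetical placeholders, not the paper's code; per the note above, the paper's discriminator is BERT-Large.

```python
# Hedged sketch of adversarial filtering (AF). All helpers are hypothetical:
#   generate_candidates(context, n) -> n machine-written wrong endings
#   train_discriminator(examples)   -> a real-vs-generated classifier
#   score(model, context, ending)   -> model's belief the ending is real
import random

def adversarial_filter(examples, generate_candidates,
                       train_discriminator, score, num_rounds=10):
    for _ in range(num_rounds):
        # Re-split each round so every example is eventually filtered
        # by a discriminator that never trained on it.
        random.shuffle(examples)
        mid = len(examples) // 2
        train_split, filter_split = examples[:mid], examples[mid:]
        discriminator = train_discriminator(train_split)
        for ex in filter_split:
            for i, neg in enumerate(ex["negatives"]):
                # An "easy" negative is one the discriminator rates below
                # the gold ending; swap it for a new generated candidate.
                if (score(discriminator, ex["context"], neg)
                        < score(discriminator, ex["context"], ex["gold"])):
                    ex["negatives"][i] = generate_candidates(ex["context"], 1)[0]
    return examples
```

The design point is that difficulty is defined relative to a strong trained model rather than by human intuition, which is why the surviving distractors remain hard even for models similar to the discriminator.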