Counterfactual Story Reasoning and Generation

@inproceedings{Qin2019CounterfactualSR,
  title={Counterfactual Story Reasoning and Generation},
  author={Lianhui Qin and Antoine Bosselut and Ari Holtzman and Chandra Bhagavatula and Elizabeth Clark and Yejin Choi},
  booktitle={EMNLP},
  year={2019}
}
Counterfactual reasoning requires predicting how alternative events, contrary to what actually happened, might have resulted in different outcomes. [...] Additionally, we include 80,115 counterfactual "branches" without a rewritten storyline to support future work on semi- or un-supervised approaches to counterfactual story rewriting. Finally, we evaluate the counterfactual rewriting capacities of several competitive baselines based on pretrained language models, and assess whether common overlap…
Sketch and Customize: A Counterfactual Story Generator
TLDR
A sketch-and-customize generation model guided by the causality implicated in the conditions and endings is proposed, which generates much better endings, as compared with the traditional sequence-to-sequence model.
SemEval-2020 Task 5: Counterfactual Recognition
TLDR
This task provides a benchmark for counterfactual recognition in natural language with two subtasks, and requires the participating systems to extract the antecedent and consequent in a given counterfactual statement.
Generating Hypothetical Events for Abductive Inference
TLDR
This work proposes a multi-task model MTL to solve the Abductive NLI task, which predicts a plausible explanation by considering different possible events emerging from candidate hypotheses – events generated by LMI – and selecting the one that is most similar to the observed outcome.
Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
TLDR
This paper investigates multiple ways to automatically generate rationales using pre-trained language models, neural knowledge models, and distant supervision from related tasks, and trains generative models capable of composing explanatory rationales for unseen instances.
Evaluating Explanations for Reading Comprehension with Realistic Counterfactuals
TLDR
This analysis suggests that pairwise explanation techniques are better suited to RC than token-level attributions, which are often unfaithful in the scenarios the authors consider, and proposes an improvement to an attention-based attribution technique, resulting in explanations which better reveal the model's behavior.
Connecting Attributions and QA Model Behavior on Realistic Counterfactuals
When a model attribution technique highlights a particular part of the input, a user might understand this highlight as making a statement about counterfactuals (Miller, 2019): if that part of the…
Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models
TLDR
Polyjuice is presented, a general-purpose counterfactual generator that allows for control over perturbation types and locations, trained by finetuning GPT-2 on multiple datasets of paired sentences.
TIMEDIAL: Temporal Commonsense Reasoning in Dialog
TLDR
This paper presents the first study to investigate pre-trained LMs for their temporal reasoning capabilities in dialogs by introducing a new task and a crowd-sourced English challenge set, TIMEDIAL, and reveals that the models fail to reason about dialog context correctly; instead, they rely on shallow cues based on existing temporal patterns in context.
Social Commonsense Reasoning with Multi-Head Knowledge Attention
TLDR
This work proposes a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them in a transformer-based reasoning cell, and is the first to demonstrate that a model that learns to perform counterfactual reasoning helps predict the best explanation in an abductive reasoning task.
“I’m Not Mad”: Commonsense Implications of Negation and Contradiction
TLDR
This paper introduces ANION, a new commonsense knowledge graph with 624K if-then rules focusing on negated and contradictory events, and presents joint generative and discriminative inference models for this new resource, providing novel empirical insights on how logical negations and commonsense contradictions reshape the commonsense implications of their original premises.

References

SHOWING 1-10 OF 41 REFERENCES
A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories
TLDR
A new framework for evaluating story understanding and script learning: the 'Story Cloze Test', which requires a system to choose the correct ending to a four-sentence story, and a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation.
Recognizing Counterfactual Thinking in Social Media Texts
TLDR
A counterfactual tweet dataset is created and approaches for detecting counterfactuals using rule-based and supervised statistical approaches are explored, finding a combined rule-based and statistical approach yielded the best results.
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
TLDR
Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
SemEval-2012 Task 7: Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning
TLDR
The two systems that competed in this task as part of SemEval-2012 are described, and their results are compared to those achieved in previously published research.
Toward a Useful Concept of Causality for Lexical Semantics
TLDR
The notion of 'causal complex' is introduced for a complete set of events and conditions necessary for the causal consequent to occur, and the term 'cause' is used for the makeshift, nonmonotonic notion the authors require for everyday tasks such as planning and language understanding.
SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
TLDR
This paper introduces the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning, and proposes Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data.
Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback
TLDR
This work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data, and devises an easy-to-use interface to collect human feedback on semantic parses.
A Theme-Rewriting Approach for Generating Algebra Word Problems
TLDR
This paper presents a text generation method called theme rewriting, which edits existing human-authored narratives to change their theme without changing the underlying story, and applies it to math word problems, where it might help students stay more engaged by quickly transforming all of their homework assignments to the theme of their favorite movie without changing the math concepts that are being taught.
Statistical Script Learning with Multi-Argument Events
TLDR
Experiments on a large corpus using the task of inferring held-out events and the "narrative cloze evaluation" demonstrate that modeling multi-argument events improves predictive accuracy.
Mental models and counterfactual thoughts about what might have been
  R. Byrne, Trends in Cognitive Sciences, 2002
TLDR
People show remarkable regularities in the aspects of the past they mentally 'undo' in their counterfactual thoughts, which provide clues about their mental representations and cognitive processes.