Counterfactual Story Reasoning and Generation

@inproceedings{Qin2019CounterfactualSR,
  title={Counterfactual Story Reasoning and Generation},
  author={Lianhui Qin and Antoine Bosselut and Ari Holtzman and Chandra Bhagavatula and Elizabeth Clark and Yejin Choi},
  booktitle={EMNLP},
  year={2019}
}
Counterfactual reasoning requires predicting how alternative events, contrary to what actually happened, might have resulted in different outcomes. In this paper, we propose counterfactual story rewriting: given an original story and an intervening counterfactual event, the task is to minimally revise the story to make it compatible with the given counterfactual event. We present TimeTravel, a new dataset of 29,849 counterfactual rewritings, each with the original story, a counterfactual event, and a human-written revision of the original story compatible with that event. Additionally, we include 80,115 counterfactual "branches" without a rewritten storyline to support future work on semi- or unsupervised approaches to counterfactual story rewriting. Finally, we evaluate the counterfactual rewriting capacities of several competitive baselines based on pretrained language models, and assess whether common overlap and model-based automatic metrics for story generation correlate well with human scores for counterfactual rewriting.
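To make the task and its overlap-based evaluation concrete, here is a minimal sketch. The record layout (field names and story text) is invented for illustration, not the dataset's documented schema; the sketch uses NLTK's sentence-level BLEU as one example of the overlap metrics the paper examines.

```python
# Illustrative sketch only: the field names below are assumptions about a
# TimeTravel-style record, not the dataset's documented schema.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

example = {
    "premise": "Ana had a huge exam coming up.",
    "initial": "She studied all week for it.",
    "counterfactual": "She partied all week instead.",
    "original_ending": "Ana aced the exam. She celebrated with friends.",
    "edited_ending": "Ana failed the exam. She regretted the parties.",
}

def overlap_score(hypothesis: str, reference: str) -> float:
    """BLEU between a model's rewritten ending and the human rewrite.
    The paper asks whether such overlap metrics track human judgments."""
    smooth = SmoothingFunction().method1  # avoid zero scores on short texts
    return sentence_bleu([reference.split()], hypothesis.split(),
                         smoothing_function=smooth)

model_output = "Ana failed the exam. She regretted partying."
print(f"BLEU vs. human rewrite: "
      f"{overlap_score(model_output, example['edited_ending']):.3f}")
```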

Citations

Unsupervised Editing for Counterfactual Stories

EDUCAT, an editing-based unsupervised approach to counterfactual story rewriting, is proposed; it detects target edit positions by estimating the causal effects of the what-if conditions and keeps the causally invariant parts of the story.
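As a rough illustration of the causal-effect idea behind target position detection (a sketch under assumptions, not EDUCAT's actual implementation), one can score each token of the original ending by how much its likelihood under a pretrained LM drops when the context switches from the original event to the what-if event; tokens with large drops are candidate edit positions, and the rest is treated as causally invariant. The sketch below uses GPT-2 via Hugging Face transformers; the story text is invented.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def token_logprobs(context: str, ending: str) -> torch.Tensor:
    """Log-probability of each ending token, conditioned on the context."""
    ctx_ids = tok(context, return_tensors="pt").input_ids
    end_ids = tok(" " + ending, return_tensors="pt").input_ids
    ids = torch.cat([ctx_ids, end_ids], dim=1)
    with torch.no_grad():
        logprobs = lm(ids).logits.log_softmax(-1)
    n_ctx = ctx_ids.shape[1]
    # logits at position i predict token i+1, so slice out the ending span
    preds = logprobs[0, n_ctx - 1 : -1]
    return preds.gather(1, end_ids[0].unsqueeze(1)).squeeze(1)

ending = "She aced the exam and celebrated all night."
# proxy causal effect: likelihood drop when the what-if context replaces
# the original one; large drops mark candidate edit positions
drop = (token_logprobs("Ana studied all week.", ending)
        - token_logprobs("Ana partied all week.", ending))
for token, d in zip(tok.tokenize(" " + ending), drop.tolist()):
    print(f"{token!r:>14}  drop: {d:+.2f}")
```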

Sketch and Customize: A Counterfactual Story Generator

A sketch-and-customize generation model, guided by the causality implicated in the conditions and endings, is proposed; it generates much better endings than a traditional sequence-to-sequence model.

Possible Stories: Evaluating Situated Commonsense Reasoning under Multiple Possible Scenarios

This study frames situated commonsense reasoning as answering multiple questions about a short story, each with the same set of possible endings as candidate answers, and finds that even strong pretrained language models struggle to answer the questions consistently.

SemEval-2020 Task 5: Counterfactual Recognition

This task provides a benchmark for counterfactual recognition in natural language with two subtasks, and requires the participating systems to extract the antecedent and consequent in a given counterfactual statement.

Counterfactual reasoning: Do Language Models need world knowledge for causal inference?

It is found that pre-trained language models are consistently able to override real-world knowledge in counterfactual scenarios, and that this effect is more robust when the baseline world knowledge is stronger; however, for most models the effect appears to be driven largely by simple lexical cues.

PASTA: A Dataset for Modeling Participant States in Narratives

The events in a narrative can be understood as a coherent whole via the underlying states of its participants. Often, these participant states are not explicitly mentioned in the narrative, left to be inferred by the reader.

Rethinking the framework constructed by counterfactual functional model

This paper proposes a mild version of the treatment-unit additivity assumption, coined M-TUA and based on the damped vibration equation in physics, which weakens the constraints of the original assumption while keeping a reasonable formal expression.

Generating Hypothetical Events for Abductive Inference

This work proposes a multi-task model, MTL, to solve the abductive NLI task: it predicts a plausible explanation by considering different possible events emerging from the candidate hypotheses (events generated by LMI) and selecting the one most similar to the observed outcome.

Learning to Imagine: Integrating Counterfactual Thinking in Neural Discrete Reasoning

This work devises a Learning to Imagine (L2I) module that can be seamlessly incorporated into NDR models to imagine unseen counterfactuals, and applies L2I to TAGOP, the state-of-the-art solution on TAT-QA, validating the rationality and effectiveness of the approach.

Counterfactual Recipe Generation: Exploring Compositional Generalization in a Realistic Scenario

This paper investigates whether pretrained language models can perform compositional generalization in a realistic setting, recipe generation, and designs the counterfactual recipe generation task, which asks models to modify a base recipe according to a change of ingredient.
...

References

Showing 1-10 of 40 references

A Corpus and Evaluation Framework for Deeper Understanding of Commonsense Stories

A new framework for evaluating story understanding and script learning is introduced: the 'Story Cloze Test', which requires a system to choose the correct ending to a four-sentence story, together with ROCStories, a new corpus of ~50k five-sentence commonsense stories, to enable this evaluation.

Recognizing Counterfactual Thinking in Social Media Texts

A counterfactual tweet dataset is created, and rule-based and supervised statistical approaches for detecting counterfactuals are explored; a combined rule-based and statistical approach yielded the best results.

ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning

Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.

SemEval-2012 Task 7: Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning

The two systems that competed in this task as part of SemEval-2012 are described, and their results are compared to those achieved in previously published research.

Toward a Useful Concept of Causality for Lexical Semantics

The notion of 'causal complex' is introduced for a complete set of events and conditions necessary for the causal consequent to occur, and the term 'cause' is used for the makeshift, nonmonotonic notion the authors require for everyday tasks such as planning and language understanding.

SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference

This paper introduces the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning, and proposes Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data.
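To make the filtering procedure concrete, here is a deliberately simplified sketch of the Adversarial Filtering loop, an illustration under assumptions rather than the paper's implementation (which uses an ensemble of stronger stylistic classifiers over a train/test split): repeatedly fit a cheap classifier on a random split and swap out the machine-written distractors it finds easiest to spot, so surface cues stop predicting the label. The features, classifier, and replacement fraction below are all placeholders.

```python
# Simplified Adversarial Filtering sketch (illustrative; not the paper's code).
import random
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def adversarial_filter(real, fake_pool, rounds=5, n_fake=1000, frac=0.1):
    """Keep swapping out the most easily detected fakes for fresh candidates."""
    fakes = random.sample(fake_pool, n_fake)
    for _ in range(rounds):
        X = TfidfVectorizer(max_features=5000).fit_transform(real + fakes)
        y = np.array([1] * len(real) + [0] * len(fakes))  # 1 = human-written
        idx = np.random.permutation(len(y))
        half = len(y) // 2
        clf = LogisticRegression(max_iter=1000).fit(X[idx[:half]], y[idx[:half]])
        # low P(real) on a fake means its style gives it away
        p_real = clf.predict_proba(X[len(real):])[:, 1]
        for j in np.argsort(p_real)[: int(n_fake * frac)]:
            fakes[j] = random.choice(fake_pool)
    return fakes  # distractors that survive are hard to distinguish by style
```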

Improving a Neural Semantic Parser by Counterfactual Learning from Human Bandit Feedback

This work is the first to show that semantic parsers can be improved significantly by counterfactual learning from logged human feedback data, and it devises an easy-to-use interface to collect human feedback on semantic parses.

A Theme-Rewriting Approach for Generating Algebra Word Problems

This paper presents a text generation method called theme rewriting, which edits existing human-authored narratives to change their theme without changing the underlying story, and applies it to math word problems, where it could help students stay engaged by transforming their homework assignments to the theme of a favorite movie without changing the math concepts being taught.

Statistical Script Learning with Multi-Argument Events

Experiments on a large corpus using the task of inferring held-out events and the “narrative cloze evaluation” demonstrate that modeling multi-argument events improves predictive accuracy.

Mental models and counterfactual thoughts about what might have been

R. Byrne. Trends in Cognitive Sciences, 2002.