ANA at SemEval-2020 Task 4: mUlti-task learNIng for cOmmonsense reasoNing (UNION)

@article{Perumal2020ANAAS,
  title={ANA at SemEval-2020 Task 4: mUlti-task learNIng for cOmmonsense reasoNing (UNION)},
  author={Anandh Perumal and Chenyang Huang and Amine Trabelsi and Osmar R. Zaiane},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.16403}
}
In this paper, we describe our mUlti-task learNIng for cOmmonsense reasoNing (UNION) system submitted for Task C of SemEval-2020 Task 4, which is to generate a reason explaining why a given false statement is nonsensical. However, we found in early experiments that simple adaptations such as fine-tuning GPT-2 often yield dull and non-informative generations (e.g., simple negations). In order to generate more meaningful explanations, we propose UNION, a unified end-to-end framework, to…
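For concreteness, below is a minimal sketch (not the authors' code) of the naive baseline the abstract contrasts against: fine-tuning GPT-2 with the Hugging Face transformers library to continue a false statement with an explanation. The separator string and the example pair are illustrative assumptions, not the ComVE data format.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical Task C pair: (false statement, reference explanation).
pairs = [
    ("He put an elephant into the fridge.",
     "An elephant is much bigger than a fridge."),
]

def encode(statement, explanation):
    # Join statement and explanation with an assumed separator so the LM
    # learns to continue a statement with its explanation.
    text = statement + " <explanation> " + explanation + tokenizer.eos_token
    return tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for statement, explanation in pairs:
    batch = encode(statement, explanation)
    # Standard causal-LM objective; the model shifts the labels internally.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

As the abstract notes, this kind of straightforward fine-tuning tends to produce generic continuations (often bare negations of the statement), which is what motivates the multi-task UNION framework.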
SemEval-2020 Task 4: Commonsense Validation and Explanation
In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not…
Reconstructing Implicit Knowledge with Language Models
TLDR
Manual and automatic evaluation of the generations shows that by refining language models as proposed, they can generate coherent and grammatically sound sentences that explicate implicit knowledge which connects sentence pairs in texts – on both in-domain and out-of-domain test data.
A Bag of Tricks for Dialogue Summarization
TLDR
This work uses a pretrained sequence-to-sequence language model to explore speaker name substitution, negation scope highlighting, multi-task learning with relevant tasks, and pretraining on in-domain data to improve summarization performance.

References

Showing 1-10 of 25 references
Explain Yourself! Leveraging Language Models for Commonsense Reasoning
TLDR
This work collects human explanations for commonsense reasoning in the form of natural language sequences and highlighted annotations in a new dataset called Common Sense Explanations to train language models to automatically generate explanations that can be used during training and inference in a novel Commonsense Auto-Generated Explanation framework.
SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
TLDR
This paper introduces the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning, and proposes Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers, and using them to filter the data.
WINOGRANDE: An Adversarial Winograd Schema Challenge at Scale
TLDR
This work introduces WinoGrande, a large-scale dataset of 44k problems, inspired by the original WSC design, but adjusted to improve both the scale and the hardness of the dataset, and establishes new state-of-the-art results on five related benchmarks.
Multi-Task Deep Neural Networks for Natural Language Understanding
TLDR
A Multi-Task Deep Neural Network (MT-DNN) for learning representations across multiple natural language understanding (NLU) tasks that allows domain adaptation with substantially fewer in-domain labels than the pre-trained BERT representations.
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
TLDR
This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TLDR
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Language Models are Unsupervised Multitask Learners
TLDR
It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering
TLDR
This paper introduces OpenBookQA, a new kind of question answering dataset modeled after open book exams for assessing human understanding of a subject; oracle experiments designed to circumvent the knowledge retrieval bottleneck demonstrate the value of both the open book and additional facts.
Machine Common Sense Concept Paper
TLDR
Two diverse strategies for focusing development on two different machine commonsense services are discussed: a service that learns from experience, like a child, to construct computational models that mimic the core domains of child cognition for objects, agents, and places, and a commonsense knowledge repository capable of answering natural language and image-based questions about commonsense phenomena.
Does it Make Sense? And Why? A Pilot Study for Sense Making and Explanation
TLDR
A benchmark to directly test whether a system can differentiate natural language statements that make sense from those that do not make sense is released and models trained over large-scale language modeling tasks as well as human performance are evaluated, showing that there are different challenges for system sense-making.