• Corpus ID: 47015717

A Simple Method for Commonsense Reasoning

@article{Trinh2018ASM,
  title={A Simple Method for Commonsense Reasoning},
  author={Trieu H. Trinh and Quoc V. Le},
  journal={ArXiv},
  year={2018},
  volume={abs/1806.02847}
}
Commonsense reasoning is a long-standing challenge for deep learning. Key to our method is the use of language models, trained on a massive amount of unlabeled data, to score multiple-choice questions posed by commonsense reasoning tests. On both Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features.
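As a rough illustration of this LM-scoring idea (a sketch, not the authors' exact models or data), the snippet below uses an off-the-shelf pretrained GPT-2 from Hugging Face Transformers to compare the likelihood of a Winograd-style sentence with each candidate substituted for the pronoun; the helper function and example sentence are assumptions for illustration.

  # Sketch: score each candidate by substituting it for the pronoun and
  # comparing sentence likelihoods under a pretrained language model.
  import torch
  from transformers import GPT2LMHeadModel, GPT2TokenizerFast

  tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
  model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

  def sentence_log_prob(sentence: str) -> float:
      # Total log-probability of the sentence under the language model.
      ids = tokenizer(sentence, return_tensors="pt").input_ids
      with torch.no_grad():
          out = model(ids, labels=ids)  # loss = mean NLL over predicted tokens
      return -out.loss.item() * (ids.size(1) - 1)

  sentence = "The trophy doesn't fit in the suitcase because it is too big."
  candidates = ["the trophy", "the suitcase"]
  scores = {c: sentence_log_prob(sentence.replace(" it ", f" {c} ", 1))
            for c in candidates}
  print(max(scores, key=scores.get))  # higher likelihood -> chosen referent

The candidate whose substitution yields the more probable sentence is taken as the answer.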

Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning

This paper introduces a new scoring method that casts a plausibility ranking task in a full-text format and leverages the masked language modeling head tuned during the pre-training phase; it requires less annotated data than the standard classifier approach to reach equivalent performance.
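A minimal sketch of masked-LM scoring of this kind, assuming a pseudo-log-likelihood formulation with an off-the-shelf BERT (the model choice and helper function below are assumptions, not the paper's exact recipe):

  # Sketch: rank full-text hypotheses by masked-LM pseudo-log-likelihood.
  import torch
  from transformers import BertForMaskedLM, BertTokenizerFast

  tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
  mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

  def pseudo_log_likelihood(text: str) -> float:
      ids = tok(text, return_tensors="pt").input_ids[0]
      total = 0.0
      for i in range(1, ids.size(0) - 1):        # skip [CLS] and [SEP]
          masked = ids.clone()
          masked[i] = tok.mask_token_id          # mask one token at a time
          with torch.no_grad():
              logits = mlm(masked.unsqueeze(0)).logits[0, i]
          total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
      return total

  hypotheses = ["He put the ice cream in the freezer.",
                "He put the ice cream in the oven."]
  print(max(hypotheses, key=pseudo_log_likelihood))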

Shortcutted Commonsense: Data Spuriousness in Deep Learning of Commonsense Reasoning

A study of prominent benchmarks that involve commonsense reasoning, along with a number of key stress experiments, seeking insight into whether the models are learning transferable generalizations intrinsic to the problem at stake or just taking advantage of incidental shortcuts in the data items.

Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning

A novel commonsense reasoning metric, Non-Replacement Confidence (NRC), which scores PLMs according to the Replaced Token Detection (RTD) pre-training objective used in ELECTRA, and shows that pre-endowed commonsense knowledge, especially for RTD-based PLMs, is essential for downstream reasoning.
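The flavor of an RTD-based confidence score can be sketched with the publicly available ELECTRA discriminator (this follows the general non-replacement-confidence idea rather than the paper's exact formula; the model name and scoring function are assumptions):

  # Sketch: prefer the option whose tokens the ELECTRA discriminator judges
  # most confidently to be original (not replaced).
  import torch
  from transformers import ElectraForPreTraining, ElectraTokenizerFast

  tok = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
  disc = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator").eval()

  def non_replacement_confidence(sentence: str) -> float:
      enc = tok(sentence, return_tensors="pt")
      with torch.no_grad():
          logits = disc(**enc).logits[0]         # positive logit = "looks replaced"
      # mean log-probability that each token is an original (non-replaced) one
      return torch.nn.functional.logsigmoid(-logits).mean().item()

  options = ["She drank a cup of coffee.", "She drank a cup of gravel."]
  print(max(options, key=non_replacement_confidence))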

Attention Is (not) All You Need for Commonsense Reasoning

A simple re-implementation of BERT for commonsense reasoning is described and it is shown that the attentions produced by BERT can be directly utilized for tasks such as the Pronoun Disambiguation Problem and Winograd Schema Challenge.
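A loose illustration of reading the answer off BERT's attention maps, assuming a simple aggregation over layers and heads (a simplification for illustration, not the paper's exact scoring procedure):

  # Sketch: compare how strongly BERT's attention links the pronoun to each
  # candidate and pick the candidate that receives more attention mass.
  import torch
  from transformers import BertModel, BertTokenizerFast

  tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
  bert = BertModel.from_pretrained("bert-base-uncased", output_attentions=True).eval()

  sentence = "The trophy doesn't fit in the suitcase because it is too big."
  enc = tok(sentence, return_tensors="pt")
  tokens = tok.convert_ids_to_tokens(enc.input_ids[0])
  with torch.no_grad():
      attentions = torch.stack(bert(**enc).attentions)  # (layers, 1, heads, seq, seq)

  pronoun_idx = tokens.index("it")
  # attention from the pronoun to every position, max over layers and heads
  attn_from_it = attentions[:, 0, :, pronoun_idx, :].amax(dim=(0, 1))

  for candidate in ["trophy", "suitcase"]:
      print(candidate, attn_from_it[tokens.index(candidate)].item())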

It’s All in the Heads: Using Attention Heads as a Baseline for Cross-Lingual Transfer in Commonsense Reasoning

This work designs a simple approach to commonsense reasoning that trains a linear classifier with the weights of multi-head attention as features, and demonstrates that most of the performance comes from the same small subset of attention heads for all studied languages, providing evidence of universal reasoning capabilities in multilingual encoders.

QiaoNing at SemEval-2020 Task 4: Commonsense Validation and Explanation System Based on Ensemble of Language Model

Transfer learning is proposed to handle the large amount and diversity of commonsense knowledge, as well as the inconsistency between the pre-training and fine-tuning tasks, which would otherwise badly hurt the performance of the model.

Combining Knowledge Hunting and Neural Language Models to Solve the Winograd Schema Challenge

This work builds on language-model-based methods and augments them with a commonsense knowledge hunting module (using automatic extraction from text) and an explicit reasoning module, achieving state-of-the-art accuracy on the WSC dataset.

Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models

Experimental results demonstrate that pre-training models using the proposed approach followed by fine-tuning achieve significant improvements over previous state-of-the-art models on two commonsense-related benchmarks, including CommonsenseQA and Winograd Schema Challenge.

Explain Yourself! Leveraging Language Models for Commonsense Reasoning

This work collects human explanations for commonsense reasoning, in the form of natural language sequences and highlighted annotations, in a new dataset called Common Sense Explanations, which is used to train language models to automatically generate explanations for use during training and inference in a novel Commonsense Auto-Generated Explanation framework.

Teaching Pretrained Models with Commonsense Reasoning: A Preliminary KB-Based Approach

This work proposes a simple yet effective method to teach pretrained models with commonsense reasoning by leveraging the structured knowledge in ConceptNet, the largest commonsense knowledge base (KB).
...

References

SHOWING 1-10 OF 39 REFERENCES

Combing Context and Commonsense Knowledge Through Neural Networks for Solving Winograd Schema Problems

A general framework that combines context and commonsense knowledge for solving the Winograd Schema (WS) and Pronoun Disambiguation Problem (PDP) tasks is proposed, along with two methods to solve the WS and PDP problems.

Probabilistic Reasoning via Deep Learning: Neural Association Models

Experimental results on several popular datasets derived from WordNet, FreeBase and ConceptNet have demonstrated that both DNNs and RMNNs perform equally well and they can significantly outperform the conventional methods available for these reasoning tasks.

Sequence to Sequence Learning with Neural Networks

This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.

Unsupervised Pretraining for Sequence to Sequence Learning

This work presents a general unsupervised learning method to improve the accuracy of sequence to sequence (seq2seq) models by initializing the weights of the encoder and decoder with the pretrained weights of two language models, which are then fine-tuned with labeled data.

Broad Context Language Modeling as Reading Comprehension

This work views LAMBADA as a reading comprehension problem and applies comprehension models based on neural networks, finding that neural network readers perform well in cases that involve selecting a name from the context based on dialogue or discourse cues but struggle when coreference resolution or external knowledge is needed.

SQuAD: 100,000+ Questions for Machine Comprehension of Text

A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).

Universal Language Model Fine-tuning for Text Classification

This work proposes Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduces techniques that are key for fine-tuning a language model.

Towards Addressing the Winograd Schema Challenge - Building and Using a Semantic Parser and a Knowledge Hunting Module

This paper presents an approach that identifies the knowledge needed to answer a challenge question, hunts down that knowledge from text repositories, and then reasons with it to come up with the answer.

Deep Contextualized Word Representations

A new type of deep contextualized word representation is introduced that models both complex characteristics of word use and how these uses vary across linguistic contexts, allowing downstream models to mix different types of semi-supervision signals.

Solving Hard Coreference Problems

This paper presents a general coreference resolution system that significantly improves state-of-the-art performance on hard, Winograd-style, pronoun resolution cases, while still performing at the state-of-the-art level on standard coreference resolution datasets.