ASER: Towards Large-scale Commonsense Knowledge Acquisition via Higher-order Selectional Preference over Eventualities

@article{Zhang2022ASERTL,
  title={ASER: Towards Large-scale Commonsense Knowledge Acquisition via Higher-order Selectional Preference over Eventualities},
  author={Hongming Zhang and Xin Liu and Haojie Pan and Hao Ke and Jiefu Ou and Tianqing Fang and Yangqiu Song},
  journal={Artif. Intell.},
  year={2022},
  volume={309},
  pages={103740}
}
Acquiring and Modelling Abstract Commonsense Knowledge via Conceptualization
TLDR
This work thoroughly studies the possible role of conceptualization in commonsense reasoning, formulates a framework to replicate human conceptual induction by acquiring abstract knowledge about abstract concepts, and develops tools for contextualization on ATOMIC, a large-scale human-annotated CKG.
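As a toy illustration of the conceptualization idea described above, the sketch below lifts a concrete object in an ATOMIC-style event to more abstract concepts via an isA taxonomy lookup. The taxonomy entries, weights, and event string are hypothetical placeholders (a real pipeline would draw on a resource such as Probase); this is not the paper's actual tooling.

# Toy sketch of event conceptualization: replace a concrete object in an
# ATOMIC-style event with more abstract concepts drawn from an isA taxonomy.
# The taxonomy and weights here are hypothetical placeholders.
TOY_ISA = {
    "napkin": [("paper product", 0.6), ("table accessory", 0.4)],
    "latte": [("coffee drink", 0.7), ("beverage", 0.3)],
}

def conceptualize(event: str, instance: str):
    """Yield abstracted versions of `event` with `instance` lifted to a concept."""
    for concept, weight in TOY_ISA.get(instance, []):
        yield event.replace(instance, concept), weight

if __name__ == "__main__":
    for abstract_event, w in conceptualize("PersonX wipes mouth with napkin", "napkin"):
        print(f"{abstract_event}  (plausibility weight ~ {w})")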
Benchmarking Commonsense Knowledge Base Population with an Effective Evaluation Dataset
Reasoning over commonsense knowledge bases (CSKB) whose elements are in the form of free text is an important yet hard task in NLP. While CSKB completion only fills the missing links within the …
Knowledge-Augmented Methods for Natural Language Processing
TLDR
This tutorial introduces the key steps in integrating knowledge into NLP, including knowledge grounding from text and knowledge representation and fusion, and surveys recent state-of-the-art applications of fusing knowledge into language understanding, language generation, and commonsense reasoning.
Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge
TLDR
It is found that generalization does not improve over the course of pre-training, suggesting that commonsense knowledge is acquired from surface-level co-occurrence patterns rather than induced, systematic reasoning.

References

Mining Verb-Oriented Commonsense Knowledge
TLDR
This paper proposes a knowledge-driven approach to mine verb-oriented commonsense knowledge from verb phrases with the help of a taxonomy, designs an entropy-based filter to cope with noisy input verb phrases, and proposes a joint model based on minimum description length and a neural language model to generate verb-oriented commonsense knowledge.
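The entropy-based filter is only named above; the sketch below shows one way such a filter might be realized, dropping verb phrases whose object conceptualizes into a near-uniform (high-entropy) concept distribution. The distributions and the threshold are invented for illustration and are not the paper's actual statistics.

import math

# Hypothetical concept distributions for the object of each verb phrase,
# e.g. derived from a taxonomy; the numbers are illustrative only.
CONCEPT_DIST = {
    "drink coffee": {"beverage": 0.9, "food": 0.1},
    "take thing":   {"object": 0.34, "action": 0.33, "idea": 0.33},
}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def keep_phrase(phrase, threshold=1.0):
    """Filter out verb phrases whose object concepts are too uncertain (high entropy)."""
    return entropy(CONCEPT_DIST[phrase]) <= threshold

for vp in CONCEPT_DIST:
    print(vp, "->", "keep" if keep_phrase(vp) else "drop")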
DISCOS: Bridging the Gap between Discourse Knowledge and Commonsense Knowledge
TLDR
Experiments demonstrate that the proposed commonsense knowledge acquisition framework DISCOS can successfully convert discourse knowledge about eventualities from ASER, a large-scale discourse knowledge graph, into if-then commonsense knowledge defined in ATOMIC without any additional annotation effort.
COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs
TLDR
It is argued that manually constructed CSKGs will never achieve the coverage necessary to be applicable in all situations encountered by NLP agents, and a new evaluation framework is proposed for testing the utility of KGs based on how effectively implicit knowledge representations can be learned from them.
SP-10K: A Large-scale Evaluation Set for Selectional Preference Acquisition
TLDR
SP-10K, a large-scale evaluation set that provides human ratings for the plausibility of 10,000 SP pairs over five SP relations, is introduced, covering the 2,500 most frequent verbs, nouns, and adjectives in American English.
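Since selectional preference (SP) acquisition is what SP-10K evaluates (and what ASER builds on), a minimal corpus-count sketch of an automatic SP score follows, using pointwise mutual information over hypothetical (verb, direct-object) counts; it illustrates the kind of model SP-10K rates, not the dataset's human-annotation protocol.

import math
from collections import Counter

# Hypothetical dependency-parsed corpus: (verb, direct-object noun) pairs.
PAIRS = [("eat", "apple")] * 30 + [("eat", "car")] * 1 + \
        [("drive", "car")] * 25 + [("drive", "apple")] * 1

pair_counts = Counter(PAIRS)
verb_counts = Counter(v for v, _ in PAIRS)
noun_counts = Counter(n for _, n in PAIRS)
total = len(PAIRS)

def sp_pmi(verb, noun):
    """PMI-style selectional preference score for a verb-object pair."""
    p_pair = pair_counts[(verb, noun)] / total
    p_verb = verb_counts[verb] / total
    p_noun = noun_counts[noun] / total
    return math.log2(p_pair / (p_verb * p_noun)) if p_pair > 0 else float("-inf")

print("eat-apple :", round(sp_pmi("eat", "apple"), 2))   # plausible, positive PMI
print("eat-car   :", round(sp_pmi("eat", "car"), 2))     # implausible, negative PMI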
COMET: Commonsense Transformers for Automatic Knowledge Graph Construction
TLDR
This investigation reveals promising results when implicit knowledge from deep pre-trained language models is transferred to generate explicit knowledge in commonsense knowledge graphs, and suggests that using generative commonsense models for automatic commonsense KB completion could soon be a plausible alternative to extractive methods.
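A rough sketch of how such a generative model can be queried is shown below, assuming a causal language model fine-tuned on linearized head-relation-tail sequences. The checkpoint name ("gpt2" as a stand-in) and the prompt format are assumptions for illustration, not the released COMET interface.

# Minimal sketch of querying a generative commonsense model; assumes a causal LM
# fine-tuned on "head <relation> [GEN] tail" sequences. The checkpoint name and
# prompt format are placeholders, not the official COMET release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; a real COMET checkpoint would be used instead
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "PersonX buys an umbrella <xNeed> [GEN]"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10, num_beams=5,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))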
Benchmarking Commonsense Knowledge Base Population with an Effective Evaluation Dataset
Reasoning over commonsense knowledge bases (CSKB) whose elements are in the form of free text is an important yet hard task in NLP. While CSKB completion only fills the missing links within the …
CoCoLM: COmplex COmmonsense Enhanced Language Model
TLDR
Through careful training over ASER, a large-scale eventuality knowledge graph, the proposed general language model CoCoLM successfully teaches pre-trained language models (i.e., BERT and RoBERTa) rich, complex commonsense knowledge among eventualities.
Language Models as Knowledge Bases?
TLDR
An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.
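A minimal sketch of this style of cloze probing with an off-the-shelf masked language model is shown below; the query sentence is the classic LAMA-style example and is illustrative only.

# Minimal sketch of probing a masked LM for relational knowledge with a cloze query,
# in the spirit of LAMA-style probing.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("Dante was born in [MASK].", top_k=3):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")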
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning
TLDR
Experimental results demonstrate that multitask models that incorporate the hierarchical structure of if-then relation types lead to more accurate inference compared to models trained in isolation, as measured by both automatic and human evaluation.
Commonsense Knowledge Mining from Pretrained Models
TLDR
This work develops a method for generating commonsense knowledge using a large, pre-trained bidirectional language model that can be used to rank a triple’s validity by the estimated pointwise mutual information between the two entities.
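The sketch below illustrates one way such a PMI-style estimate can be computed with a masked LM: compare the tail word's probability when the head is visible against when the head is masked out. The sentence template and the single-token assumption are simplifications for illustration, not the paper's exact estimator.

# Sketch of scoring a commonsense triple (knife, UsedFor, cut) with a masked LM via
# a PMI-style estimate: how much more likely the tail becomes once the head is visible.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def tail_log_prob(sentence: str, word: str, which_mask: int = 0) -> float:
    """Log-probability of `word` (assumed single-token) at the chosen [MASK] slot."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_positions[which_mask]]
    word_id = tokenizer.convert_tokens_to_ids(word)
    return torch.log_softmax(logits, dim=-1)[word_id].item()

head, tail = "knife", "cut"
conditional = tail_log_prob(f"A {head} is used to [MASK] things.", tail)
marginal = tail_log_prob("A [MASK] is used to [MASK] things.", tail, which_mask=1)
print(f"PMI-style score for ({head}, UsedFor, {tail}): {conditional - marginal:.3f}")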
...