BIG MOOD: Relating Transformers to Explicit Commonsense Knowledge

@article{Da2019BIGMR,
  title={BIG MOOD: Relating Transformers to Explicit Commonsense Knowledge},
  author={Jeff Da},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.07713}
}
We introduce a simple yet effective method of integrating contextual embeddings with commonsense graph embeddings, dubbed BERT Infused Graphs: Matching Over Other embeDdings. First, we introduce a preprocessing method to improve the speed of querying knowledge bases. Then, we develop a method of creating knowledge embeddings from each knowledge base. We introduce a method of aligning tokens between two misaligned tokenization methods. Finally, we contribute a method of contextualizing BERT…
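As a rough illustration of the alignment and fusion steps the abstract describes, the sketch below maps BERT WordPiece tokens back to the whitespace-level words a knowledge base is keyed on, then concatenates a pooled contextual vector with a graph embedding per word. This is not the paper's implementation; the function names, the mean-pooling choice, and the zero-vector fallback for out-of-vocabulary words are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): align WordPiece tokens to
# whitespace tokens, then fuse contextual and knowledge-graph embeddings.
from typing import Dict, List
import numpy as np

def align_wordpieces(words: List[str], wordpieces: List[str]) -> List[List[int]]:
    """Map each whitespace-level word to the indices of its WordPiece subwords.

    Relies on the WordPiece convention that continuation pieces start with '##'.
    """
    alignment, i = [], 0
    for _ in words:
        span = [i]
        i += 1
        # Absorb continuation pieces belonging to the same word.
        while i < len(wordpieces) and wordpieces[i].startswith("##"):
            span.append(i)
            i += 1
        alignment.append(span)
    return alignment

def fuse_embeddings(bert_vecs: np.ndarray,              # (num_wordpieces, d_bert)
                    graph_vecs: Dict[str, np.ndarray],  # word -> (d_graph,)
                    words: List[str],
                    alignment: List[List[int]],
                    d_graph: int) -> np.ndarray:
    """Concatenate a mean-pooled BERT vector with a KB graph vector per word."""
    fused = []
    for word, span in zip(words, alignment):
        contextual = bert_vecs[span].mean(axis=0)             # pool subword vectors
        knowledge = graph_vecs.get(word, np.zeros(d_graph))   # OOV word -> zeros
        fused.append(np.concatenate([contextual, knowledge]))
    return np.stack(fused)

# Toy usage: "playing" splits into "play" + "##ing" under WordPiece.
words = ["he", "was", "playing"]
wordpieces = ["he", "was", "play", "##ing"]
alignment = align_wordpieces(words, wordpieces)   # [[0], [1], [2, 3]]
bert_vecs = np.random.randn(len(wordpieces), 8)
graph_vecs = {"playing": np.random.randn(4)}
print(fuse_embeddings(bert_vecs, graph_vecs, words, alignment, d_graph=4).shape)  # (3, 12)
```

Concatenation is only one plausible fusion choice; the fused per-word vectors could equally be combined with the contextual ones by summation or a learned gate before being fed to a downstream classifier.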
