Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models

@article{Ye2019AlignMA,
  title={Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models},
  author={Zhi-Xiu Ye and Qian Chen and Wen Wang and Zhen-Hua Ling},
  journal={ArXiv},
  year={2019},
  volume={abs/1908.06725}
}
Neural language representation models such as Bidirectional Encoder Representations from Transformers (BERT), pre-trained on large-scale corpora, can effectively capture rich semantics from plain text and can be fine-tuned to consistently improve performance on various natural language processing (NLP) tasks. However, existing pre-trained language representation models rarely consider explicitly incorporating commonsense knowledge or other kinds of knowledge. In this paper, we develop a pre-training…
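
The abstract is truncated before the method is described, but the title names an "align, mask and select" procedure for building commonsense-aware pre-training data. Below is a minimal, hypothetical sketch in Python of what such an example-construction step could look like, assuming (head, relation, tail) triples from a commonsense knowledge graph such as ConceptNet: align a sentence with a triple, mask the aligned concept, and ask the model to select it from distractors. The function names, the distractor-sampling rule, and the toy data are illustrative assumptions, not the paper's exact procedure.

# Hypothetical sketch of an "align, mask, select" style example builder.
# Assumes commonsense triples of the form (head, relation, tail); the exact
# alignment and distractor sampling used in the paper may differ.
import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Triple:
    head: str
    relation: str
    tail: str

def align(sentence: str, triples: List[Triple]) -> Optional[Triple]:
    """Return a triple whose head and tail concepts both appear in the sentence."""
    for t in triples:
        if t.head in sentence and t.tail in sentence:
            return t
    return None

def mask_and_select(sentence: str, triple: Triple, triples: List[Triple],
                    num_distractors: int = 4) -> dict:
    """Mask the aligned tail concept and build a multiple-choice selection task."""
    masked = sentence.replace(triple.tail, "[MASK]")
    # Distractors: other tail concepts sharing the same relation (illustrative choice).
    pool = [t.tail for t in triples
            if t.relation == triple.relation and t.tail != triple.tail]
    distractors = random.sample(pool, min(num_distractors, len(pool)))
    candidates = distractors + [triple.tail]
    random.shuffle(candidates)
    return {"question": masked, "candidates": candidates, "answer": triple.tail}

# Example usage with toy data.
triples = [Triple("bird", "CapableOf", "fly"),
           Triple("fish", "CapableOf", "swim"),
           Triple("dog", "CapableOf", "bark")]
sentence = "A bird can fly because it has wings."
t = align(sentence, triples)
if t is not None:
    print(mask_and_select(sentence, t, triples, num_distractors=2))

Each constructed item can then be used as a multiple-choice pre-training instance for a BERT-style model, analogous to a multi-choice question-answering objective.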
