ExpBERT: Representation Engineering with Natural Language Explanations

@inproceedings{Murty2020ExpBERTRE,
  title={ExpBERT: Representation Engineering with Natural Language Explanations},
  author={Shikhar Murty and Pang Wei Koh and Percy Liang},
  booktitle={ACL},
  year={2020}
}
  • Abstract: Suppose we want to specify the inductive bias that married couples typically go on honeymoons for the task of extracting pairs of spouses from text. In this paper, we allow model developers to specify these types of inductive biases as natural language explanations. We use BERT fine-tuned on MultiNLI to "interpret" these explanations with respect to the input sentence, producing explanation-guided representations of the input. Across three relation extraction tasks, our method, ExpBERT…
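The abstract describes the core mechanism: each natural-language explanation is paired with the input sentence, run through an NLI-fine-tuned BERT, and the resulting pooled vectors are concatenated into an explanation-guided feature vector. The sketch below illustrates that feature construction only; the `encode_pair` stub (a deterministic hash-based pseudo-embedding, with `HIDDEN_DIM` chosen arbitrarily) stands in for the actual MultiNLI-fine-tuned BERT encoder, which is not reproduced here.

```python
import hashlib
import numpy as np

HIDDEN_DIM = 64  # stand-in for BERT's 768-dim pooled output


def encode_pair(sentence: str, explanation: str) -> np.ndarray:
    """Placeholder for the MultiNLI-fine-tuned BERT encoder.

    In ExpBERT this would be the pooled representation of the
    (sentence, explanation) pair; here we hash the pair into a
    deterministic pseudo-embedding so the sketch runs standalone.
    """
    digest = hashlib.md5((sentence + "||" + explanation).encode()).hexdigest()
    rng = np.random.default_rng(int(digest[:8], 16))
    return rng.standard_normal(HIDDEN_DIM)


def expbert_features(sentence: str, explanations: list) -> np.ndarray:
    """Concatenate one 'interpretation' vector per explanation,
    yielding an explanation-guided representation of the input."""
    return np.concatenate([encode_pair(sentence, e) for e in explanations])


# Inductive biases expressed as natural-language explanations
# (the first is the example from the abstract; the second is illustrative).
explanations = [
    "Married couples typically go on honeymoons.",
    "Spouses often live in the same house.",
]
feats = expbert_features(
    "Alice and Bob went on their honeymoon to Fiji.", explanations
)
print(feats.shape)  # one HIDDEN_DIM-sized block per explanation
```

A downstream relation classifier would then be trained on `feats` (typically alongside standard BERT features of the input sentence).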


    References

    Showing 1-10 of 19 references:
    • e-SNLI: Natural Language Inference with Natural Language Explanations
    • Training Classifiers with Natural Language Explanations
    • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    • Joint Concept Learning and Semantic Parsing from Natural Language Explanations
    • Explain Yourself! Leveraging Language Models for Commonsense Reasoning
    • A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
    • Learning with Latent Language
    • SpanBERT: Improving Pre-training by Representing and Predicting Spans