Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge

@inproceedings{Minervini2018AdversariallyRN,
  title={Adversarially Regularising Neural NLI Models to Integrate Logical Background Knowledge},
  author={Pasquale Minervini and Sebastian Riedel},
  booktitle={CoNLL},
  year={2018}
}
  • Pasquale Minervini, Sebastian Riedel
  • Published in CoNLL 2018
  • Computer Science, Mathematics
  • Abstract: Adversarial examples are inputs to machine learning models designed to cause the model to make a mistake. [...] Furthermore, we propose a method for adversarially regularising neural NLI models to incorporate background knowledge. Our results show that, while the proposed method does not always improve results on the SNLI and MultiNLI datasets, it significantly and consistently increases the predictive accuracy on adversarially-crafted datasets -- up to a 79.6% relative improvement -- while [...]
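
The idea described in the abstract lends itself to a short sketch: add a regularisation term that penalises violations of logical background-knowledge rules (for instance, that contradiction is symmetric: if A contradicts B, then B should contradict A) on adversarially generated inputs. The PyTorch snippet below is a minimal illustration of that loop, not the paper's implementation: the TinyNLI model, the class index, the step size, and the weight lam are all invented for this sketch, and where the paper searches for adversarial examples over actual sentences, the sketch substitutes a simple gradient-ascent search in embedding space.

import torch
import torch.nn as nn

# Hypothetical stand-in NLI model (illustration only): scores a pair of
# fixed-size (premise, hypothesis) embeddings into probabilities over
# {entailment, contradiction, neutral}.
class TinyNLI(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 3))

    def forward(self, prem, hyp):
        return torch.softmax(self.scorer(torch.cat([prem, hyp], dim=-1)), dim=-1)

CONTRADICTION = 1  # class index chosen arbitrarily for this sketch

def inconsistency(model, prem, hyp):
    # Background-knowledge rule: contradiction is symmetric, i.e.
    # contr(A, B) should imply contr(B, A). The hinge term below is
    # positive exactly when the model violates that implication.
    p_ab = model(prem, hyp)[:, CONTRADICTION]
    p_ba = model(hyp, prem)[:, CONTRADICTION]
    return torch.relu(p_ab - p_ba).mean()

model = TinyNLI()
# Start the adversarial search from random embeddings (the paper instead
# searches over real sentences).
prem = torch.randn(8, 16, requires_grad=True)
hyp = torch.randn(8, 16, requires_grad=True)

# Inner loop: maximise the rule violation to find adversarial inputs.
for _ in range(5):
    g_p, g_h = torch.autograd.grad(inconsistency(model, prem, hyp), [prem, hyp])
    with torch.no_grad():
        prem += 0.1 * g_p.sign()
        hyp += 0.1 * g_h.sign()

# Outer step: the usual task loss plus the rule-violation penalty measured
# on the adversarial inputs, weighted by a hyperparameter lam.
lam = 1.0
task_loss = torch.tensor(0.0)  # placeholder for cross-entropy on real data
total = task_loss + lam * inconsistency(model, prem.detach(), hyp.detach())
total.backward()  # gradients flow into the model parameters

The hinge form keeps the penalty at zero whenever the rule holds, so the regulariser only pushes on inputs where the model is actually inconsistent.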