Probing Neural Network Comprehension of Natural Language Arguments

@article{Niven2019ProbingNN,
  title={Probing Neural Network Comprehension of Natural Language Arguments},
  author={Timothy Niven and Hung-Yu Kao},
  journal={ArXiv},
  year={2019},
  volume={abs/1907.07355}
}
  • Timothy Niven, Hung-Yu Kao
  • Published 2019
  • Computer Science
  • ArXiv
  • We are surprised to find that BERT's peak performance of 77% on the Argument Reasoning Comprehension Task reaches just three points below the average untrained human baseline. However, we show that this result is entirely accounted for by exploitation of spurious statistical cues in the dataset. We analyze the nature of these cues and demonstrate that a range of models all exploit them. This analysis informs the construction of an adversarial dataset on which all models achieve random accuracy.
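The cue analysis mentioned in the abstract can be illustrated with a small sketch. The snippet below is a hypothetical example, not the authors' code: it estimates how often a single token such as "not" appears in exactly one of the two candidate warrants (its coverage over the dataset) and how often that warrant is the labelled correct one (its productivity). The field names warrant0, warrant1, and label are assumptions made for illustration of an ARCT-style data point.

    # Hypothetical sketch (not the authors' code): estimate how predictive a
    # single token such as "not" is of the correct warrant in ARCT-style data.
    # Field names 'warrant0', 'warrant1', 'label' are assumptions for illustration.
    def cue_statistics(examples, cue="not"):
        applicable = 0   # cue occurs in exactly one of the two candidate warrants
        productive = 0   # ...and that warrant is the labelled correct one
        for ex in examples:
            in_w0 = cue in ex["warrant0"].lower().split()
            in_w1 = cue in ex["warrant1"].lower().split()
            if in_w0 != in_w1:              # the cue distinguishes the two warrants
                applicable += 1
                predicted = 0 if in_w0 else 1
                if predicted == ex["label"]:
                    productive += 1
        return {
            "applicability": applicable,
            "productivity": productive / applicable if applicable else 0.0,
            "coverage": applicable / len(examples) if examples else 0.0,
        }

    # Toy usage with two fabricated examples:
    data = [
        {"warrant0": "sport does not relieve stress",
         "warrant1": "sport relieves stress", "label": 0},
        {"warrant0": "it is affordable",
         "warrant1": "it is not affordable", "label": 1},
    ]
    print(cue_statistics(data))
    # -> {'applicability': 2, 'productivity': 1.0, 'coverage': 1.0}

On an adversarial set of the kind the abstract describes, such cues are by construction no longer correlated with the correct label, so a heuristic of this sort falls back to chance accuracy.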
