Corpus ID: 202712683

Language models and Automated Essay Scoring

  • Authors: P. Rodriguez, A. Jafari, Christopher M. Ormerod
  • Published: 2019
  • Fields: Computer Science, Mathematics
  • Venue: ArXiv
  • Abstract: In this paper, we present a new comparative study on automatic essay scoring (AES). [...] We elucidate the network architectures of BERT and XLNet using clear notation and diagrams, and explain the advantages of transformer architectures over traditional recurrent neural network architectures. Linear algebra notation is used to clarify the functions of transformers and attention mechanisms. We compare the results with more traditional methods, such as bag of words (BOW) and long short-term memory…
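The attention mechanism the abstract refers to is, at its core, a small linear-algebra computation. As a minimal sketch (not taken from the paper itself), scaled dot-product attention — the building block of transformer models such as BERT and XLNet — can be written in a few lines of NumPy; the function name and variable names here are illustrative:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V are 2-D arrays of shape (seq_len, d_k): the query,
    key, and value matrices produced by learned linear projections.
    """
    d_k = Q.shape[-1]
    # Similarity scores between every query and every key,
    # scaled by sqrt(d_k) to keep the softmax well-conditioned.
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable row-wise softmax: each row becomes a
    # probability distribution over the key positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a convex combination of the value rows.
    return weights @ V
```

In a full transformer this computation is repeated across multiple heads and layers, which is what gives the architecture its advantage over recurrent networks: every token attends to every other token in one parallel matrix product rather than through a sequential hidden state.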

