LM-Critic: Language Models for Unsupervised Grammatical Error Correction

@article{Yasunaga2021LMCriticLM,
  title={LM-Critic: Language Models for Unsupervised Grammatical Error Correction},
  author={Michihiro Yasunaga and Jure Leskovec and Percy Liang},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.06822}
}
Grammatical error correction (GEC) requires a set of labeled ungrammatical/grammatical sentence pairs for training, but obtaining such annotation can be prohibitively expensive. Recently, the Break-It-Fix-It (BIFI) framework has demonstrated strong results on learning to repair a broken program without any labeled examples, but it relies on a perfect critic (e.g., a compiler) that returns whether an example is valid or not, and no such critic exists for the GEC task. In this work, we show how to…
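
As intuition for how a language model can play the critic's role, here is a minimal sketch of the paper's local-optimum criterion: a sentence is judged grammatical if no nearby perturbation receives higher LM probability. The GPT-2 checkpoint and the word-deletion neighborhood below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of an LM-based critic via the local-optimum idea:
# a sentence is grammatical if the LM assigns it at least as much
# probability as every sentence in a small neighborhood around it.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(sentence: str) -> float:
    """Approximate total log-probability of a sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean per-token NLL.
        nll = model(ids, labels=ids).loss.item()
    return -nll * ids.size(1)

def neighbors(sentence: str) -> list:
    """Toy neighborhood: drop one word at a time (the paper uses richer
    edit-distance perturbations; deletion alone is an assumption here)."""
    words = sentence.split()
    if len(words) < 2:
        return []
    return [" ".join(words[:i] + words[i + 1:]) for i in range(len(words))]

def lm_critic(sentence: str) -> bool:
    """True if no neighbor is more probable than the sentence itself."""
    base = log_prob(sentence)
    return all(log_prob(n) <= base for n in neighbors(sentence))
```

In the BIFI loop, such a critic stands in for the compiler, filtering candidate corrections without any labeled data.
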
2 Citations

uChecker: Masked Pretrained Language Models as Unsupervised Chinese Spelling Checkers

TLDR: Experimental results on standard datasets demonstrate the effectiveness of the proposed model, uChecker, in terms of character-level and sentence-level accuracy, precision, recall, and F1-measure on the tasks of spelling error detection and correction, respectively.
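
As a loose sketch of the masked-LM idea (not uChecker's actual model): mask each token in turn and flag tokens the pretrained LM finds implausible, proposing its top prediction as a correction. The bert-base-chinese checkpoint and the probability threshold are assumptions.

```python
# Loose sketch of masked-LM spell checking: mask each token and flag
# tokens the LM assigns very low probability.
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese").eval()

def check(sentence: str, threshold: float = 1e-3):
    ids = tokenizer(sentence, return_tensors="pt").input_ids[0]
    suggestions = []
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        probs = logits.softmax(-1)
        if probs[ids[i]] < threshold:         # original token is implausible
            best = probs.argmax().item()
            suggestions.append(
                (i, tokenizer.decode([int(ids[i])]), tokenizer.decode([best]))
            )
    return suggestions
```
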

Transformers and Graph Neural Networks for Spell Checking

TLDR: This work studies the use of Transformers and graph neural networks for spelling error detection at the sequence and word level, as well as for spelling error correction, and shows that open-vocabulary sequence-to-sequence Transformers can perform well at spelling correction.

References

Showing 1-10 of 61 references

Corpora Generation for Grammatical Error Correction

TLDR: It is demonstrated that neural GEC models trained on either type of corpus give similar performance, and a systematic analysis is presented that compares the two approaches to data generation and highlights the effectiveness of ensembling.

Neural Grammatical Error Correction Systems with Unsupervised Pre-training on Synthetic Data

TLDR: This work proposes a simple and surprisingly effective unsupervised synthetic error generation method based on confusion sets extracted from a spellchecker to increase the amount of training data.
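
A minimal sketch of that idea follows, with hand-written toy confusion sets standing in for ones extracted from a real spellchecker (e.g., Aspell suggestions), which is an assumption here.

```python
import random

# Toy confusion sets; the paper derives these from spellchecker output.
CONFUSIONS = {
    "their": ["there", "they're"],
    "affect": ["effect"],
    "than": ["then"],
}

def add_noise(sentence: str, p: float = 0.15) -> str:
    """Corrupt a clean sentence by swapping words for confusable ones,
    yielding synthetic (noisy, clean) training pairs for GEC."""
    words = sentence.split()
    noisy = [
        random.choice(CONFUSIONS[w])
        if w in CONFUSIONS and random.random() < p else w
        for w in words
    ]
    return " ".join(noisy)

# Usage: synthetic_pairs = [(add_noise(s), s) for s in clean_corpus]
```
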

Grammatical Error Correction in Low-Resource Scenarios

TLDR: This paper presents AKCES-GEC, a new dataset for grammatical error correction in Czech, reports experiments on Czech, German, and Russian, and shows that a Transformer neural machine translation model can reach new state-of-the-art results on these datasets when a synthetic parallel corpus is utilized.

Learning to combine Grammatical Error Corrections

TLDR: An automatic way to combine black-box systems is proposed: it detects the strength of a single system, or of a combination of several systems, per error type, improving precision and recall while optimizing F-score directly.

Neural Grammatical Error Correction with Finite State Transducers

TLDR: The best system developed for LM-GEC outperforms the best published result on the CoNLL-2014 test set, and achieves far better relative improvements over the SMT baselines than previous hybrid systems.

Automatic Annotation and Evaluation of Error Types for Grammatical Error Correction

TLDR: ERRANT, a grammatical ERRor ANnotation Toolkit, is presented; it automatically extracts edits from parallel original and corrected sentences and classifies them according to a new, dataset-agnostic, rule-based framework, facilitating error type evaluation at different levels of granularity.
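
For reference, edit extraction with the released errant package looks roughly like the following; the API usage follows the package's documentation, and the example sentences are made up.

```python
# Rough usage sketch of the errant toolkit described above; requires
# spaCy and an English spaCy model to be installed.
import errant

annotator = errant.load("en")
orig = annotator.parse("This are a sentence .")
cor = annotator.parse("This is a sentence .")
for e in annotator.annotate(orig, cor):
    # Each edit exposes source/target spans, strings, and an error type
    # such as R:VERB:SVA (replacement, verb, subject-verb agreement).
    print(e.o_start, e.o_end, e.o_str, "->", e.c_str, e.type)
```
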

Unsupervised Parsing via Constituency Tests

TLDR: An unsupervised parser is designed by specifying a set of transformations and using an unsupervised neural acceptability model to make grammaticality decisions; the refined model achieves 62.8 F1 on the Penn Treebank test set, an absolute improvement of 7.6 points over the previous best published result.

Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data

TLDR: This paper proposes a copy-augmented architecture for the GEC task that copies unchanged words from the source sentence to the target sentence, and fully pre-trains the sequence-to-sequence model on unlabeled data.
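
As a rough illustration of a copy mechanism of this kind (a generic pointer-style sketch under assumed dimensions, not the paper's exact architecture): a learned gate mixes the decoder's vocabulary distribution with a copy distribution over source tokens.

```python
# Toy sketch of mixing a generation distribution with a copy distribution
# over source tokens via a learned gate. Wiring is an assumption.
import torch
import torch.nn.functional as F

def copy_augmented_probs(gen_logits, attn_scores, src_token_ids, vocab_size, gate):
    """gen_logits: [vocab_size]; attn_scores: [src_len];
    src_token_ids: int64 tensor [src_len]; gate: scalar in (0, 1)."""
    p_gen = F.softmax(gen_logits, dim=-1)
    # Scatter attention mass onto the vocabulary ids of the source tokens.
    p_copy = torch.zeros(vocab_size).scatter_add_(
        0, src_token_ids, F.softmax(attn_scores, dim=-1))
    return gate * p_gen + (1 - gate) * p_copy
```
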

Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task

TLDR: The combined effects of adding source-side noise, domain-adaptation techniques, a GEC-specific training objective, transfer learning with monolingual data, and ensembling of independently trained GEC models and language models result in neural GEC models that outperform the previously best neural GEC systems.

Cross-Corpora Evaluation and Analysis of Grammatical Error Correction Models — Is Single-Corpus Evaluation Enough?

TLDR: Evaluating the performance of several GEC models, including NMT-based models (LSTM, CNN, and Transformer) and an SMT-based model, against various learner corpora reveals that single-corpus evaluation is insufficient for GEC models.
...