LM-Critic: Language Models for Unsupervised Grammatical Error Correction

@article{Yasunaga2021LMCriticLM,
  title={LM-Critic: Language Models for Unsupervised Grammatical Error Correction},
  author={Michihiro Yasunaga and Jure Leskovec and Percy Liang},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.06822}
}
Grammatical error correction (GEC) requires a set of labeled ungrammatical / grammatical sentence pairs for training, but obtaining such annotation can be prohibitively expensive. Recently, the Break-It-Fix-It (BIFI) framework has demonstrated strong results on learning to repair a broken program without any labeled examples, but this relies on a perfect critic (e.g., a compiler) that returns whether an example is valid or not, which does not exist for the GEC task. In this work, we show how to… 
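The core criterion behind LM-Critic can be sketched briefly: a sentence is judged grammatical if a pretrained language model assigns it a probability at least as high as any sentence in its local perturbation neighborhood. The snippet below is a minimal illustration of that idea using GPT-2 from Hugging Face transformers; the word-level perturbation function is a toy stand-in for the paper's edit-based neighborhood and is not the authors' implementation.

```python
# Minimal sketch of an LM-based critic: judge a sentence grammatical if the LM
# gives it a probability no lower than any of its local perturbations.
# GPT-2 and the toy perturbation set are illustrative choices, not the paper's exact setup.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(sentence: str) -> float:
    """Approximate total log-probability of the sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        nll = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -nll.item() * ids.size(1)

def perturbations(sentence: str):
    """Toy local neighborhood: delete or duplicate one word at a time."""
    words = sentence.split()
    for i in range(len(words)):
        yield " ".join(words[:i] + words[i + 1:])                    # delete word i
        yield " ".join(words[:i + 1] + [words[i]] + words[i + 1:])   # duplicate word i

def lm_critic(sentence: str) -> bool:
    """Return True if the sentence is a local optimum of LM probability."""
    score = log_prob(sentence)
    return all(log_prob(p) <= score for p in perturbations(sentence) if p.strip())

print(lm_critic("She is going to the store."))  # expected: True
print(lm_critic("She going to the store."))     # expected: False
```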

Grammatical Error Correction: A Survey of the State of the Art

The field is condensed into a single article: some of the linguistic challenges of the task are outlined, the most popular datasets available to researchers are introduced, and the various methods and techniques that have been developed, with a particular focus on artificial error generation, are summarized.

SynGEC: Syntax-Enhanced Grammatical Error Correction with a Tailored GEC-Oriented Parser

This work proposes a syntax-enhanced grammatical error correction (GEC) approach named SynGEC that effectively incorporates dependency syntactic information into the encoder part of GEC models.

uChecker: Masked Pretrained Language Models as Unsupervised Chinese Spelling Checkers

Experimental results on standard datasets demonstrate the effectiveness of the proposed model, uChecker, in terms of character-level and sentence-level accuracy, precision, recall, and F1-measure on the tasks of spelling error detection and correction.

Transformers and Graph Neural Networks for Spell Checking

This work studies the usage of Transformers and graph neural networks for spelling error detection on sequence and word level, as well as spelling error correction, and shows that open vocabulary sequence-to-sequence Transformers can perform well for spelling correction.

A Simple, Yet Effective Approach to Finding Biases in Code Generation

This work shows that current code generation systems exhibit biases inherited from large language model backbones, which might leak into generated code under specific circumstances, and proposes a framework that automatically removes hints and exposes various biases that these code generation models use.

Converge to the Truth: Factual Error Correction via Iterative Constrained Editing

This work proposes VENCE, a novel method for factual error correction (FEC) with minimal edits, which formulates the FEC problem as iteratively sampling editing actions with respect to a target density function.

Generating Sequences by Learning to Self-Correct

Self-correction provides a flexible framework for improving the performance of off-the-shelf and fine-tuned language models on a wide range of tasks by decomposing generation into a base generator and a corrector that learns to iteratively improve imperfect generations.

References

Showing 1-10 of 61 references

Corpora Generation for Grammatical Error Correction

It is demonstrated that neural GEC models trained using either type of corpora give similar performance, and systematic analysis is presented that compares the two approaches to data generation and highlights the effectiveness of ensembling.

Neural Grammatical Error Correction Systems with Unsupervised Pre-training on Synthetic Data

This work proposes a simple and surprisingly effective unsupervised synthetic error generation method based on confusion sets extracted from a spellchecker to increase the amount of training data.

Grammatical Error Correction in Low-Resource Scenarios

This paper presents AKCES-GEC, a new dataset for grammatical error correction in Czech, reports experiments on Czech, German, and Russian, and shows that when utilizing a synthetic parallel corpus, a Transformer neural machine translation model can reach new state-of-the-art results on these datasets.

Learning to combine Grammatical Error Corrections

An automatic way to combine black-box GEC systems is proposed: it detects the strength of a single system, or of a combination of systems, per error type, improving precision and recall while directly optimizing F-score.

Neural Grammatical Error Correction with Finite State Transducers

The best system developed for LM-GEC outperforms the best published result on the CoNLL-2014 test set, and achieves far better relative improvements over the SMT baselines than previous hybrid systems.

Automatic Annotation and Evaluation of Error Types for Grammatical Error Correction

ERRANT, a grammatical ERRor ANnotation Toolkit, is designed to automatically extract edits from parallel original and corrected sentences and classify them according to a new, dataset-agnostic, rule-based framework, facilitating error type evaluation at different levels of granularity.
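As a rough illustration of what the toolkit does (not taken from this paper), ERRANT's documented Python interface can extract and classify edits from a parallel sentence pair along these lines; the sketch assumes the errant package and a spaCy English model are installed.

```python
# Sketch: extract and classify edits between an original and a corrected sentence
# with ERRANT's Python API (assumes `pip install errant` plus a spaCy English model).
import errant

annotator = errant.load("en")
orig = annotator.parse("This are a sentence .")
cor = annotator.parse("This is a sentence .")

for edit in annotator.annotate(orig, cor):
    # Each edit carries its source span, correction span, and an error type label.
    print(edit.o_str, "->", edit.c_str, edit.type)
```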

Unsupervised Parsing via Constituency Tests

An unsupervised parser is designed by specifying a set of transformations and using an unsupervised neural acceptability model to make grammaticality decisions, and the refined model achieves 62.8 F1 on the Penn Treebank test set, an absolute improvement of 7.6 points over the previous best published result.

Improving Grammatical Error Correction via Pre-Training a Copy-Augmented Architecture with Unlabeled Data

This paper proposes a copy-augmented architecture for the GEC task, which copies the unchanged words from the source sentence to the target sentence and is fully pre-trained as a sequence-to-sequence model on unlabeled data.

Approaching Neural Grammatical Error Correction as a Low-Resource Machine Translation Task

The combined effects of adding source-side noise, domain-adaptation techniques, a GEC-specific training objective, transfer learning with monolingual data, and ensembling of independently trained GEC models and language models result in state-of-the-art neural GEC models that outperform the previously best neural GEC systems.

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
...