Order-n correction for regular languages

@article{Wagner1974OrdernCF,
  title={Order-n correction for regular languages},
  author={Robert A. Wagner},
  journal={Commun. ACM},
  year={1974},
  volume={17},
  pages={265-268}
}
A method is presented for calculating a string B, belonging to a given regular language L, which is “nearest” (in number of edit operations) to a given input string α. B is viewed as a reasonable “correction” for the possibly erroneous string α, where α was originally intended to be a string of L. The calculation of B by the method presented requires time…
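The abstract only sketches the approach, but the underlying idea is a dynamic program over positions of α and states of a deterministic finite automaton for L. The Python sketch below is an illustration under that reading, not the paper's published algorithm: the name nearest_distance, the DFA encoding (delta as a dict keyed by (state, symbol)), and the insertion-closure helper are assumptions made for this example, and only the edit distance is returned; the corrected string B would be recovered by keeping back-pointers.

def nearest_distance(alpha, states, start, accepting, delta, alphabet):
    INF = float("inf")
    # dist[q] = cheapest way to edit the prefix of alpha read so far into
    # a string that drives the DFA from `start` to state q.
    dist = {q: INF for q in states}
    dist[start] = 0
    dist = _close_under_insertions(dist, states, delta, alphabet)
    for ch in alpha:
        nxt = {q: INF for q in states}
        for q in states:
            if dist[q] == INF:
                continue
            # deletion: consume ch, emit nothing, stay in state q
            nxt[q] = min(nxt[q], dist[q] + 1)
            # match / substitution: consume ch, emit symbol a, follow the DFA
            for a in alphabet:
                cost = 0 if a == ch else 1
                r = delta[(q, a)]
                nxt[r] = min(nxt[r], dist[q] + cost)
        dist = _close_under_insertions(nxt, states, delta, alphabet)
    # B must end in an accepting state; report the cheapest such entry.
    return min(dist[q] for q in accepting)

def _close_under_insertions(dist, states, delta, alphabet):
    # Insertions emit a symbol without consuming input, so relax entries
    # within the current column until nothing improves (each insertion
    # costs 1, so this terminates).
    changed = True
    while changed:
        changed = False
        for q in states:
            if dist[q] == float("inf"):
                continue
            for a in alphabet:
                r = delta[(q, a)]
                if dist[q] + 1 < dist[r]:
                    dist[r] = dist[q] + 1
                    changed = True
    return dist

For a DFA over {'a', 'b'} accepting strings that end in 'b' (states {0, 1}, start 0, accepting {1}, delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}), nearest_distance('aba', ...) returns 1: delete the trailing 'a', or substitute it with 'b'.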
A bibliography on syntax error handling in context free languages
TLDR
This bibliography grew out of a graduate seminar course conducted jointly with Fred Ires and with the participation of Laura Babbitt, John Morgan and Henry Worth, in the Winter of 1989.
OCR Error Correction of an Inflectional Indian Language Using Morphological Parsing
This paper deals with an OCR (Optical Character Recognition) error detection and correction technique for a highly inflectional Indian language, Bangla, the second-most popular language in India and…
Correcting Counter-Automaton-Recognizable Languages
TLDR
Using a linear-time algorithm for solving single-origin graph shortest-distance problems, it is shown how to correct a string of length n into the language accepted by a counter automaton in time proportional to $n^2$ on a RAM with unit operation cost function.
How Hard Is Computing the Edit Distance?
TLDR
This paper presents a parallel algorithm for computing the edit distance for the class of languages accepted by one-way nondeterministic auxiliary pushdown automata working in polynomial time, a class that strictly contains context-free languages.
Techniques for automatically correcting words in text
TLDR
Research aimed at correcting words in text has focused on three progressively more difficult problems: (1) nonword error detection; (2) isolated-word error correction; and (3) context-dependent word correction; the survey also documents findings on spelling error patterns.
Automatic error recovery for LR parsers in theory and practice
TLDR
The need for good syntax error handling schemes in language translation systems such as compilers, and for the automatic incorporation of such schemes into parser-generators, is argued.
FarsiSpell: A spell-checking system for Persian using a large monolingual corpus
TLDR
The work seeks to demonstrate the effectiveness of a large monolingual corpus of Persian in improving the output quality of a spell-checker developed for the language.
An effective algorithm for string correction using generalized edit distance - II. Computational complexity of the algorithm and some applications
TLDR
This paper deals with the problem of estimating an unknown transmitted string X* belonging to a finite dictionary H from its observable noisy version Y, and develops an algorithm to find the string X⁺ ∈ H which minimizes the generalized Levenshtein distance D(X⁺, Y).
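As a point of reference for the problem statement only (not the algorithm developed in the paper), the objective is an argmin over the finite dictionary; correct_word and its pluggable distance argument below are illustrative assumptions.

def correct_word(noisy, dictionary, distance):
    # Brute-force baseline: return the dictionary entry closest to the noisy
    # observation under the supplied distance function. The cited paper gives
    # a far more efficient procedure; this only restates the objective.
    return min(dictionary, key=lambda candidate: distance(candidate, noisy))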
Data-driven spell checking: The synergy of two algorithms for spelling error detection and correction
TLDR
This research attempts to improve the quality of Subasa, an existing n-gram-based, data-driven spell checker, using minimum edit distance techniques, and to make the system freely available online.
Computing the edit distance of a regular language
The edit distance (or Levenshtein distance) between two words is the smallest number of substitutions, insertions, and deletions of symbols that can be used to transform one of the words into the other…

References

Showing 1-10 of 17 references
An error-correcting parse algorithm
TLDR
It is the author's opinion that those algorithms which do the best job of error recovery are those which are restricted to simpler forms of formal languages.
The String-to-String Correction Problem
TLDR
An algorithm is presented which solves the string-to-string correction problem in time proportional to the product of the lengths of the two strings.
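The reference is the Wagner-Fischer paper; a minimal sketch of the standard dynamic program it describes (distance only, with no traceback of the edit script) fills a table of size (len(a) + 1) by (len(b) + 1), matching the stated product-of-lengths running time.

def edit_distance(a, b):
    # d[i][j] = cost of turning a[:i] into b[:j].
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        d[i][0] = i                               # delete all of a[:i]
    for j in range(1, len(b) + 1):
        d[0][j] = j                               # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete a[i-1]
                          d[i][j - 1] + 1,        # insert b[j-1]
                          d[i - 1][j - 1] + sub)  # match or substitute
    return d[len(a)][len(b)]

For example, edit_distance('kitten', 'sitting') evaluates to 3.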
Algorithm 97: Shortest path
TLDR
The procedure was originally programmed in FORTRAN for the Control Data 160 desk-size computer and was limited to tetration because subroutine recursiveness in Control Data 160 FORTRAN has been held down to four levels in the interests of economy.
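Algorithm 97 is Floyd's all-pairs shortest-path procedure, originally published in ALGOL; a minimal Python rendering of the same triple loop is shown below (weight is an n-by-n matrix with zeros on the diagonal and float('inf') where no edge exists).

def shortest_paths(weight):
    # Successively allow each vertex k as an intermediate point and relax
    # every pair (i, j) through it.
    n = len(weight)
    dist = [row[:] for row in weight]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist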
PL/C: the design of a high-performance compiler for PL/I
TLDR
A general-purpose production compiler faces many diverse and demanding tasks; by yielding on some of these requirements, and by sacrificing generality for efficiency for a particular class of program or user, improved compiler performance should be obtainable.
CORC—the Cornell computing language
CORC is an experimental computing language that was developed at Cornell University to serve the needs of a large and increasing group of computer users whose demands are both limited and…
Spelling correction in systems programs
TLDR
By using systems which perform spelling correction, the number of debugging runs per program has been decreased, saving both programmer and machine time.
Compiler Construction for Digital Computers
TLDR
The techniques involved in writing compilers for high-level languages such as FORTRAN or PL/I, as well as semantic routines, are described.
PL/C--A high performance compiler for PL/I
  • Proc. 1971 SJCC,
  • 1971
An n^3 minimum edit distance correction algorithm for context-free languages
  • Tech. Rep., Systems and Information Science Dept.
  • 1972
Compiler Construction for Digital Computers
  • 1971