Neural learning of approximate simple regular languages


Discrete-time recurrent neural networks (DTRNN) have been used to infer deterministic finite automata (DFA) from sets of examples and counterexamples, but discrete algorithmic methods are much better at this task, clearly outperforming DTRNN in both space and time complexity. We show, however, how DTRNN may be used to learn not the exact language that explains the whole learning set but…
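As a minimal illustration of the kind of model involved (not the paper's own construction), the sketch below shows a second-order discrete-time recurrent network, of the style often used in DFA inference, whose weights are hand-set so that it exactly simulates the two-state DFA for the "even number of 1s" language. In actual grammatical inference these weights would be learned from examples and counterexamples; the hard-threshold activation and the tensor `W` are assumptions made for clarity.

```python
import numpy as np

def step(z):
    # Hard threshold activation: fires when input exceeds 0.5.
    return (z > 0.5).astype(float)

# State units: one-hot over DFA states {even, odd}.
# Input symbols: one-hot over alphabet {0, 1}.
# W[i, j, k] is the weight from the product (state j, input k) to next state i.
W = np.zeros((2, 2, 2))
W[0, 0, 0] = 1.0  # even --0--> even
W[0, 1, 1] = 1.0  # odd  --1--> even
W[1, 0, 1] = 1.0  # even --1--> odd
W[1, 1, 0] = 1.0  # odd  --0--> odd

def accepts(string):
    s = np.array([1.0, 0.0])              # start in state "even"
    for ch in string:
        x = np.eye(2)[int(ch)]            # one-hot encode the input symbol
        # Second-order update: next state depends on state-input products.
        s = step(np.einsum('ijk,j,k->i', W, s, x))
    return bool(s[0] == 1.0)              # accept iff final state is "even"
```

With hand-set weights the network's state trajectory mirrors the DFA transition table exactly; the learning problem the abstract refers to is recovering such weights (or an approximation of them) from labelled strings alone.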