Learn More
Pollack (1991) demonstrated that second-order recurrent neural networks can act as dynamical recognizers for formal languages when trained on positive and negative examples, and observed both phase transitions in learning and IFS-like fractal state sets. Follow-on work focused mainly on the extraction and minimization of a finite state automaton (FSA) from …
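
Concretely, a second-order network makes its transition weights input-dependent: a tensor W[j, i, k] couples hidden unit i and input symbol k to hidden unit j, so each symbol effectively selects its own transition matrix. A minimal NumPy sketch of these (untrained) dynamics; the sizes, names, and acceptance readout below are illustrative assumptions, not Pollack's exact setup:

    # Second-order recurrent update h_j <- g(sum_{i,k} W[j,i,k] * h_i * x_k)
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_symbols = 4, 2                # hidden units, alphabet {a, b}
    W = rng.normal(scale=0.5, size=(n_states, n_states, n_symbols))
    h0 = np.full(n_states, 0.5)               # fixed initial state

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def run(string):
        """Iterate the multiplicative state update over an input string."""
        h = h0
        for sym in string:                    # sym is 0 or 1
            x = np.eye(n_symbols)[sym]        # one-hot input vector
            h = sigmoid(np.einsum('jik,i,k->j', W, h, x))
        return h[0] > 0.5                     # read acceptance off one unit

    print(run([0, 1, 0, 1]))                  # classify an example string
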
Recurrent neural network processing of regular languages is reasonably well understood. Recent work has examined the less familiar question of context-free languages. Previous results on the language a^n b^n suggest that while it is possible for a small recurrent network to process context-free languages, learning them is difficult. This paper …
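
The standard observation behind the possibility claim is that a^n b^n reduces to counting: one unit that ramps up on each a and down on each b suffices. A hand-wired sketch of that mechanism (illustrative only; the cited work asks whether gradient descent can find such solutions):

    # Counting dynamics sufficient for a^n b^n, written as plain control flow:
    # +1 per a, -1 per b, rejecting any a that follows a b.
    def accepts_anbn(string):
        count, seen_b = 0, False
        for sym in string:
            if sym == 'a':
                if seen_b:              # an a after a b breaks the a^n b^n shape
                    return False
                count += 1              # the unit's activation steps up
            else:                       # sym == 'b'
                seen_b = True
                count -= 1              # and steps down
                if count < 0:
                    return False
        return count == 0 and seen_b    # balanced and non-empty

    print(accepts_anbn('aaabbb'), accepts_anbn('aabbb'))  # True False
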
In recent years it has been shown that first-order recurrent neural networks trained by gradient descent can learn not only regular but also simple context-free and context-sensitive languages. However, the success rate was generally low and severe instability issues were encountered. The present study examines the hypothesis that a combination of …
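
Studies in this line typically frame learning as next-symbol prediction over strings drawn from the language, where only the trailing symbols are deterministically predictable. A small data-generation sketch; the range of n and the input/target framing here are our assumptions:

    # Next-symbol-prediction examples for a^n b^n (context-free) or
    # a^n b^n c^n (context-sensitive).
    def make_example(n, alphabet='ab'):
        s = ''.join(c * n for c in alphabet)
        # input is the string, target is the string shifted by one symbol
        return s[:-1], s[1:]

    for n in (1, 2, 3):
        print(make_example(n, 'abc'))
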
We examined the interrelations of outcome, time elapsed during cardiopulmonary resuscitation (CPR), and blood glucose levels drawn from 83 patients with out-of-hospital cardiac arrest. Glucose levels rose significantly during CPR. Although the slope and intercept of the regression lines differed for those dying in the field and those admitted, the regression lines were similar …
Recent work by Siegelmann has shown that the computational power of neural networks matches that of Turing machines. The proofs are based on a fractal encoding of states that simulates the memory and operations of stacks. In the present work, it is shown that similar stack-like dynamics can be learned in recurrent neural networks from simple sequence prediction …
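
The fractal encoding referred to here compresses an unbounded binary stack into a single real-valued activation, so push and pop become affine maps on that value. A self-contained sketch; the specific base-4 convention below is one common choice, assumed here, and Siegelmann's construction differs in detail:

    # Stack b1 b2 ... (top first) stored as s = sum_i (2*b_i + 1) / 4**i in [0, 1).
    def push(s, bit):
        return s / 4 + (2 * bit + 1) / 4

    def top(s):
        return 1 if s >= 0.5 else 0    # s in [1/4,1/2): top=0; s in [3/4,1): top=1

    def pop(s):
        return 4 * s - (2 * top(s) + 1)   # no empty-stack guard in this sketch

    s = 0.0                        # empty stack encodes as 0
    for bit in (1, 0, 1):
        s = push(s, bit)
    print(top(s), pop(s))          # top is 1 (the last pushed bit)
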
Although TD-Gammon is one of the major successes in machine learning, it has not led to similarly impressive breakthroughs in temporal difference learning for other applications or even other games. We were able to replicate some of the success of TD-Gammon, developing a competitive evaluation function on a 4000-parameter feed-forward neural network, …
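
For reference, the temporal difference rule at issue is TD(lambda) with eligibility traces: delta = r + gamma*V(s') - V(s), e <- gamma*lambda*e + grad V, w <- w + alpha*delta*e. A minimal sketch with a linear value function standing in for TD-Gammon's multilayer net; all constants and sizes are assumed:

    import numpy as np

    n_features = 8
    w = np.zeros(n_features)              # value-function weights
    e = np.zeros(n_features)              # eligibility traces
    alpha, gamma, lam = 0.1, 1.0, 0.7     # step size, discount, trace decay

    def value(x):
        return float(w @ x)

    def td_step(x, x_next, reward):
        """One TD(lambda) update after observing transition x -> x_next."""
        global w, e
        delta = reward + gamma * value(x_next) - value(x)   # TD error
        e = gamma * lam * e + x           # grad of a linear value fn is x
        w = w + alpha * delta * e

    # toy transition with random features and a terminal reward
    rng = np.random.default_rng(0)
    td_step(rng.random(n_features), rng.random(n_features), reward=1.0)
    print(w)
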