At the level of individual neurons, catecholamine release increases the responsivity of cells to excitatory and inhibitory inputs. A model of catecholamine effects in a network of neural-like elements is presented, which shows that (i) changes in the responsivity of individual elements do not affect their ability to detect a signal and ignore noise but (ii) …
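The modeling idea behind (i) can be made concrete with a logistic unit whose net input is scaled by a multiplicative gain term, a common way of representing catecholamine-modulated responsivity. The sketch below is only illustrative: the function name, bias value, and example inputs are assumptions, not taken from the paper.

```python
import numpy as np

def unit_response(net_input, gain=1.0, bias=-1.0):
    """Logistic activation with a multiplicative gain on the net input.

    Raising `gain` steepens the response curve, so the unit reacts more
    strongly to both excitatory (positive) and inhibitory (negative)
    input -- the modeled effect of increased catecholamine release.
    """
    return 1.0 / (1.0 + np.exp(-(gain * net_input + bias)))

# Same inputs under two gain settings; the high-gain unit responds more sharply.
inputs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(unit_response(inputs, gain=1.0))
print(unit_response(inputs, gain=2.0))
```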
We present an overview of Candide, a system for automatic translation of French text to English text. Candide uses methods of information theory and statistics to develop a probability model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate …
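The truncated abstract does not spell out the model form; for orientation, the source-channel decomposition commonly associated with this line of statistical translation work chooses an English sentence for a given French sentence f as

```latex
\hat{e} \;=\; \arg\max_{e} \, p(e \mid f) \;=\; \arg\max_{e} \, p(e)\, p(f \mid e)
```

where p(e) is a language model and p(f | e) a translation model estimated from the aligned sentence pairs.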
We present a maximum entropy language model that incorporates both syntax and semantics via a dependency grammar. Such a grammar expresses the relations between words by a directed graph. Because the edges of this graph may connect words that are arbitrarily far apart in a sentence, this technique can incorporate the predictive power of words that lie …
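For reference, a maximum entropy language model of this kind has the generic exponential form below; the dependency-grammar features f_i(h, w) relating a word w to its possibly distant governors in the history h are the paper's contribution and are not reproduced here.

```latex
p(w \mid h) \;=\; \frac{1}{Z(h)} \exp\!\Big(\sum_i \lambda_i\, f_i(h, w)\Big),
\qquad
Z(h) \;=\; \sum_{w'} \exp\!\Big(\sum_i \lambda_i\, f_i(h, w')\Big)
```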
We describe an implementation of a simple probabilistic link grammar. This probabilistic language model extends trigrams by allowing a word to be predicted not only from the two immediately preceding words, but potentially from any preceding pair of adjacent words that lie within the same sentence. In this way, the trigram model can skip over less …
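A minimal sketch of the prediction rule described above, assuming hypothetical lookup tables: `pair_prob` plays the role of a trigram-style table p(w | u, v) and `link_prob` a distribution over which preceding adjacent pair does the predicting. Neither table, nor the function name, comes from the paper.

```python
def extended_trigram_prob(sentence, i, pair_prob, link_prob):
    """Probability of sentence[i] when the predicting context may be any
    preceding pair of adjacent words, not just the two immediately before it.

    pair_prob[(u, v)][w] -- hypothetical table p(w | u, v)
    link_prob[k]         -- hypothetical probability that the adjacent pair
                            ending at position k is the predicting one
    """
    total = 0.0
    for k in range(1, i):  # the pair (k-1, k) lies strictly before position i
        u, v = sentence[k - 1], sentence[k]
        total += link_prob.get(k, 0.0) * pair_prob.get((u, v), {}).get(sentence[i], 0.0)
    return total
```

Taking k = i - 1 with probability one recovers the ordinary trigram model.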
This paper describes a robust, accurate, efficient, low-resource, medium-vocabulary, grammar-based speech recognition system using Hidden Markov Models for mobile applications. Among the issues and techniques we explore are improving robustness and efficiency of the front-end, using multiple microphones for removing extraneous signals from speech via a new …
This paper describes IBM's large vocabulary continuous speech recognition (LVCSR) system used in the 1997 Hub4 English evaluation. It focuses on extensions and improvements to the system used in the 1996 evaluation. The recognizer uses an additional 35 hours of training data over that used in the 1996 Hub4 evaluation. It includes a number of new …
We report the results of investigations in acoustic modeling, language modeling and decoding techniques, for DARPA Communicator, a speaker-independent, telephone-based dialog system. By a combination of methods, including enlarging the acoustic model, augmenting the recognizer vocabulary, conditioning the language model upon dialog state, and applying a …
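The abstract does not say how the language model is conditioned on dialog state; one common scheme, given here only as an assumed illustration, interpolates a state-specific model with a general one:

```latex
p(w \mid h, s) \;=\; \lambda_s\, p_s(w \mid h) \;+\; (1 - \lambda_s)\, p_{\text{gen}}(w \mid h)
```

where s is the current dialog state and the weight \lambda_s is tuned per state.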
In this paper we study the gain, a naturally arising statistic from the theory of MEMD modeling [2], as a figure of merit for selecting features for an MEMD language model. We compare the gain with two popular alternatives, empirical activation and mutual information, and argue that the gain is the preferred statistic, on the grounds that it directly measures a …
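In the feature-induction literature this gain is usually defined as the largest improvement in training log-likelihood obtainable by adding the candidate feature f with a single optimal weight; written out under that standard definition, which the truncated abstract does not itself state:

```latex
G(f) \;=\; \max_{\alpha}\, \Big[ L\big(p_{\alpha f}\big) - L\big(p\big) \Big],
\qquad
p_{\alpha f}(w \mid h) \;=\; \frac{p(w \mid h)\, e^{\alpha f(h, w)}}{Z_{\alpha}(h)}
```

where L denotes training-data log-likelihood and Z_alpha(h) renormalizes the extended model.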