This paper describes the research efforts of the "Hidden Speaking Mode" group participating in the 1996 summer workshop on speech recognition. The goal of this project is to model pronunciation variations that occur in conversational speech in general and, more specifically, to investigate the use of a hidden speaking mode to represent systematic variations…
We first illustrate the use of hand-labelled phonetic transcriptions of a portion of the Switchboard corpus, in conjunction with statistical techniques, to learn alternatives to canonical pronunciations of words. We then describe the use of these alternate pronunciations in a recognition experiment as well as in the acoustic training of an automatic speech…
We present a simple preordering approach for machine translation based on a feature-rich logistic regression model that predicts whether two children of the same node in the source-side parse tree should be swapped. Given the pairwise regression scores for the children, we conduct an efficient depth-first branch-and-bound search through the space of possible…
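The search described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dictionary `p_swap` of pairwise swap probabilities, the `pair_logp` helper, and the additive log-probability objective are all assumptions made for the example.

```python
import math

def pair_logp(a, b, p_swap):
    """Log-probability that child a precedes child b, derived from a
    hypothetical pairwise swap-probability table p_swap[(a, b)] = P(a after b)."""
    p = p_swap.get((a, b), 0.5)
    return math.log(max(1.0 - p, 1e-12))

def best_child_order(children, p_swap):
    """Depth-first branch-and-bound over permutations of sibling nodes,
    maximising the sum of pairwise ordering log-probabilities."""
    best = {"score": -math.inf, "order": list(children)}

    def upper_bound(remaining):
        # Optimistic bound: each unordered pair contributes its better direction.
        b = 0.0
        for i, x in enumerate(remaining):
            for y in remaining[i + 1:]:
                b += max(pair_logp(x, y, p_swap), pair_logp(y, x, p_swap))
        return b

    def dfs(prefix, remaining, score):
        if not remaining:
            if score > best["score"]:
                best["score"], best["order"] = score, prefix[:]
            return
        if score + upper_bound(remaining) <= best["score"]:
            return  # prune: this branch cannot beat the incumbent order
        for i, c in enumerate(remaining):
            rest = remaining[:i] + remaining[i + 1:]
            # Placing c next means c precedes every remaining sibling.
            gain = sum(pair_logp(c, r, p_swap) for r in rest)
            dfs(prefix + [c], rest, score + gain)

    dfs([], list(children), 0.0)
    return best["order"]
```

For two children where the model strongly prefers swapping, e.g. `p_swap = {("A", "B"): 0.9, ("B", "A"): 0.1}`, the search returns `["B", "A"]`; pruning matters once nodes have many children and the permutation space grows factorially.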
We propose the use of neural networks to model source-side preordering for faster and better statistical machine translation. The neural network trains a logistic regression model to predict whether two sibling nodes of the source-side parse tree should be swapped in order to obtain a more monotonic parallel corpus, based on samples extracted from the…
The relative contributions of item concreteness and interitem spatial organization to recall processes were studied by attempting to induce modality-specific interference between recall and response. Separate groups of 12 Ss learned lists of items that varied in physical or referential visual characteristics. They later signaled information about them…
We investigate the use of hierarchical phrase-based SMT lattices in end-to-end neural machine translation (NMT). Weight pushing transforms the Hiero scores for complete translation hypotheses, with the full translation grammar score and full n-gram language model score, into posteriors compatible with NMT predictive probabilities. With a slightly modified…