Machine Learning for Adaptive Spoken Control in PDA Applications

Abstract

A machine learning approach to interpreting utterances in spoken interfaces is described, where evidence from the utterance and from the dialogue context is combined to estimate a probability distribution over interpretations. The algorithm for the utterance evidence uses nearest-neighbour classification on a set of training examples, while the contextual evidence is provided by dialogue act n-grams derived from dialogue corpora. Each algorithm can adapt by recording data from the current user. Experimental results for the utterance interpreter show that adaptation to a particular user’s training utterances significantly improves recognition accuracy over training on utterances from the general population.
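The combination the abstract describes can be sketched in code: utterance evidence from a k-nearest-neighbour vote over stored training examples, context evidence from a dialogue-act bigram model, the two multiplied and normalised into a distribution. This is a minimal illustrative sketch, not the authors' implementation; the class and method names, the 1/(1+distance) vote weighting, and the add-one smoothing are all assumptions.

```python
from collections import Counter, defaultdict

def distance(a, b):
    """Word-level Levenshtein distance between two token sequences."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n]

class AdaptiveInterpreter:
    """Hypothetical sketch: nearest-neighbour utterance evidence combined
    with a dialogue-act bigram context model. Adaptation amounts to
    recording the current user's utterances and dialogues as new data."""

    def __init__(self, k=3):
        self.k = k
        self.examples = []                   # (tokens, dialogue_act) pairs
        self.bigrams = defaultdict(Counter)  # prev_act -> Counter of next acts
        self.acts = set()

    def add_utterance(self, text, act):
        # Record a labelled utterance (general-population or user-specific).
        self.examples.append((text.lower().split(), act))
        self.acts.add(act)

    def add_dialogue(self, act_sequence):
        # Record dialogue-act bigrams from an observed dialogue.
        for prev, nxt in zip(act_sequence, act_sequence[1:]):
            self.bigrams[prev][nxt] += 1
            self.acts.update((prev, nxt))

    def interpret(self, text, prev_act=None):
        """Return a probability distribution over dialogue acts."""
        tokens = text.lower().split()
        # Utterance evidence: k nearest neighbours vote, weighted by 1/(1+d).
        neighbours = sorted(self.examples,
                            key=lambda ex: distance(tokens, ex[0]))[:self.k]
        votes = Counter()
        for ex_tokens, act in neighbours:
            votes[act] += 1.0 / (1.0 + distance(tokens, ex_tokens))
        scores = {}
        for act in self.acts:
            u = votes.get(act, 1e-6)  # floor so unseen acts keep mass
            if prev_act is not None:
                # Context evidence: add-one-smoothed bigram P(act | prev_act).
                c = self.bigrams[prev_act]
                ctx = (c[act] + 1) / (sum(c.values()) + len(self.acts))
            else:
                ctx = 1.0
            scores[act] = u * ctx
        total = sum(scores.values())
        return {a: s / total for a, s in scores.items()}
```

With a couple of training utterances and one recorded dialogue, `interpret("turn on the lamp")` would rank `command` highest on utterance evidence alone, and passing `prev_act` shifts the distribution by the bigram context.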

1 Figure or Table

Cite this paper

@inproceedings{McEleney2003MachineLF,
  title  = {Machine Learning for Adaptive Spoken Control in PDA Applications},
  author = {Bryan McEleney and Gregory O’Hare},
  year   = {2003}
}