We propose "advanced" n-grams as a new technique for simulating user behaviour in spoken dialogue systems, and we compare it with two methods used in our prior work: linear feature combination and "normal" n-grams. All methods operate at the intention level and can incorporate speech recognition and understanding errors. In the linear feature…
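The n-gram approach above can be illustrated with a minimal sketch: a bigram simulator that counts which dialogue act follows each context in a corpus and samples the next user act accordingly. All act labels, corpus format, and function names here are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

def train_ngram_simulator(dialogues, n=2):
    """Count which dialogue act follows each context of n-1 preceding acts."""
    counts = defaultdict(lambda: defaultdict(int))
    for acts in dialogues:
        for i in range(len(acts) - n + 1):
            context = tuple(acts[i:i + n - 1])
            counts[context][acts[i + n - 1]] += 1
    return counts

def simulate_user_act(counts, context, rng=random.Random(0)):
    """Sample the next act given the dialogue-act context (None if unseen)."""
    dist = counts.get(tuple(context), {})
    if not dist:
        return None
    acts, weights = zip(*dist.items())
    return rng.choices(acts, weights=weights)[0]

# Hypothetical intention-level training dialogues.
dialogues = [
    ["sys:greet", "usr:inform", "sys:confirm", "usr:affirm"],
    ["sys:greet", "usr:inform", "sys:confirm", "usr:negate"],
]
model = train_ngram_simulator(dialogues, n=2)
act = simulate_user_act(model, ["sys:confirm"])  # samples affirm or negate
```

Because the model is purely count-based, recognition and understanding errors can be incorporated simply by training on (possibly noisy) observed act sequences rather than gold annotations.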
This paper describes and compares two methods for simulating user behaviour in spoken dialogue systems. User simulations are important for automatic dialogue strategy learning and the evaluation of competing strategies. Our methods are designed for use with "Information State Update" (ISU)-based dialogue systems. The first method is based on supervised…
The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. The…
This paper presents a probabilistic method to simulate task-oriented human-computer dialogues at the intention level, which may be used to improve or to evaluate the performance of spoken dialogue systems. Our method uses a network of Hidden Markov Models (HMMs) to predict system and user intentions, where a "language model" predicts sequences of goals and…
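The core HMM idea can be sketched as a forward pass over hidden goal states followed by a one-step prediction of the next intention. The state names, transition and emission tables below are invented toy values, not the paper's trained models.

```python
def predict_next_intention(trans, emit, observations, states):
    """
    Forward pass over a simple HMM of dialogue intentions: maintain a
    distribution over hidden goal states given observed acts, then return
    the most likely next intention under the transition model.
    """
    # Uniform prior over hidden goal states.
    alpha = {s: 1.0 / len(states) for s in states}
    for obs in observations:
        alpha = {s: emit[s].get(obs, 1e-6) *
                    sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
        z = sum(alpha.values())
        alpha = {s: a / z for s, a in alpha.items()}
    # Predicted distribution over the next hidden state.
    nxt = {s: sum(alpha[p] * trans[p][s] for p in states) for s in states}
    return max(nxt, key=nxt.get)

# Hypothetical two-state model: after a question, an answer is likely.
states = ["ask", "answer"]
trans = {"ask": {"ask": 0.2, "answer": 0.8},
         "answer": {"ask": 0.7, "answer": 0.3}}
emit = {"ask": {"question": 0.9, "statement": 0.1},
        "answer": {"question": 0.1, "statement": 0.9}}
prediction = predict_next_intention(trans, emit, ["question"], states)
```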
We demonstrate a multimodal dialogue system using reinforcement learning for in-car scenarios, developed at Edinburgh University and Cambridge University for the TALK project. This prototype is the first "Information State Update" (ISU) dialogue system to exhibit reinforcement learning of dialogue strategies, and it also has a fragmentary clarification…
This paper presents a generic dialogue state tracker that maintains beliefs over user goals based on a few simple domain-independent rules, using basic probability operations. The rules apply to observed system actions and partially observable user acts, without using any knowledge obtained from external resources (i.e., without requiring training data). The…
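A rule-based belief update of this kind can be sketched with one basic probability operation: each SLU hypothesis shifts probability mass toward its value, scaled by the hypothesis confidence, while the remaining mass stays with the previous belief. Slot names, values, and the exact update rule below are illustrative assumptions.

```python
def update_belief(belief, slu_hypotheses):
    """
    Merge an SLU n-best list into the current per-slot belief.
    For each (slot, value, prob) hypothesis: scale the existing
    distribution by (1 - prob), then add prob to the observed value,
    so the slot distribution stays normalised.
    """
    new_belief = {slot: dict(values) for slot, values in belief.items()}
    for slot, value, prob in slu_hypotheses:
        old = new_belief.setdefault(slot, {})
        for v in old:
            old[v] *= (1.0 - prob)
        old[value] = old.get(value, 0.0) + prob
    return new_belief

# Hypothetical prior belief and one partially confident user act.
belief = {"food": {"italian": 0.6, "none": 0.4}}
updated = update_belief(belief, [("food", "chinese", 0.5)])
```

The rule is domain-independent: it never inspects slot names or values, only the confidence scores attached to observed acts, which is why no training data is needed.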
We use machine learners trained on a combination of acoustic confidence and pragmatic plausibility features computed from dialogue context to predict the accuracy of incoming n-best recognition hypotheses to a spoken dialogue system. Our best results show a 25% weighted f-score improvement over a baseline system that implements a "grammar-switching"…
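The combination of acoustic confidence with context-based plausibility can be sketched as a simple weighted reranker over an n-best list. The weights, feature definitions, and slot representation here are toy assumptions standing in for the trained machine learners described above.

```python
def rescore_nbest(nbest, context_slots, w_conf=0.7, w_plaus=0.3):
    """
    Rerank n-best recognition hypotheses by a weighted combination of
    acoustic confidence and pragmatic plausibility, where plausibility
    is the fraction of a hypothesis's slots consistent with the
    current dialogue context.
    """
    def plausibility(hyp_slots):
        if not hyp_slots:
            return 0.0
        ok = sum(1 for s, v in hyp_slots.items()
                 if context_slots.get(s, v) == v)
        return ok / len(hyp_slots)

    scored = [(w_conf * conf + w_plaus * plausibility(slots), text)
              for text, conf, slots in nbest]
    return [text for _, text in sorted(scored, reverse=True)]

# Hypothetical n-best list: the acoustically stronger hypothesis
# contradicts the dialogue context and is demoted.
context = {"city": "Edinburgh"}
nbest = [
    ("fly to Stockholm", 0.62, {"city": "Stockholm"}),
    ("fly to Edinburgh", 0.58, {"city": "Edinburgh"}),
]
ranked = rescore_nbest(nbest, context)
```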
We propose a method for learning dialogue management policies from a fixed dataset. The method is designed for use with "Information State Update" (ISU)-based dialogue systems, which represent the state of a dialogue as a large set of features, resulting in a very large state space and a very large policy space. To address the problem that any fixed…
We present an interdisciplinary methodology for designing interactive multi-modal technology for young children with autism spectrum disorders (ASDs). In line with many other researchers in the field, we believe that the key to developing technology in this context is to embrace perspectives from diverse disciplines to arrive at a methodology that delivers…