We propose the "advanced" n-grams as a new technique for simulating user behaviour in spoken dialogue systems, and we compare it with two methods used in our prior work, i.e. linear feature combination and "normal" n-grams. All methods operate on the intention level and can incorporate speech recognition and understanding errors. In the linear feature…
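The n-gram approach described above can be illustrated with a minimal sketch: a bigram user simulator that samples the next user intention conditioned on the previous system act. The dialogue-act names and the toy training corpus below are invented for illustration, not taken from the paper.

```python
import random
from collections import defaultdict, Counter

def train_bigram(dialogues):
    """Count user-act frequencies conditioned on the preceding system act."""
    counts = defaultdict(Counter)
    for dialogue in dialogues:
        for system_act, user_act in dialogue:
            counts[system_act][user_act] += 1
    return counts

def simulate_user(counts, system_act, rng=None):
    """Sample a user act given the last system act, proportionally to counts."""
    rng = rng or random.Random(0)
    acts = counts[system_act]
    total = sum(acts.values())
    r = rng.uniform(0, total)
    cumulative = 0
    for act, count in acts.items():
        cumulative += count
        if r <= cumulative:
            return act

# Toy corpus of (system_act, user_act) pairs at the intention level.
corpus = [
    [("request_slot", "provide_info"), ("confirm", "affirm")],
    [("request_slot", "provide_info"), ("confirm", "negate")],
    [("request_slot", "silence"), ("confirm", "affirm")],
]
model = train_bigram(corpus)
print(simulate_user(model, "confirm"))
```

A longer n-gram history (or recognition-error injection, as the abstract mentions) would condition the counts on more context in the same way.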
This paper describes and compares two methods for simulating user behaviour in spoken dialogue systems. User simulations are important for automatic dialogue strategy learning and the evaluation of competing strategies. Our methods are designed for use with "Information State Update" (ISU)-based dialogue systems. The first method is based on supervised…
The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. The…
We demonstrate a multimodal dialogue system using reinforcement learning for in-car scenarios, developed at Edinburgh University and Cambridge University for the TALK project. This prototype is the first "Information State Update" (ISU) dialogue system to exhibit reinforcement learning of dialogue strategies, and also has a fragmentary clarification…
This paper presents a probabilistic method to simulate task-oriented human-computer dialogues at the intention level, which may be used to improve or to evaluate the performance of spoken dialogue systems. Our method uses a network of Hidden Markov Models (HMMs) to predict system and user intentions, where a "language model" predicts sequences of goals and…
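The HMM idea above can be sketched with a tiny model: hidden dialogue goals emit observable intentions, and the forward algorithm scores an observed intention sequence. The state names, intentions, and all probabilities below are invented for illustration; the paper's models are trained from corpus data.

```python
# Hidden goals and observable intentions (illustrative names only).
goals = ["get_info", "confirm_info"]

# Transition, emission, and initial distributions (toy numbers).
trans = {"get_info": {"get_info": 0.6, "confirm_info": 0.4},
         "confirm_info": {"get_info": 0.2, "confirm_info": 0.8}}
emit = {"get_info": {"ask": 0.7, "answer": 0.2, "confirm": 0.1},
        "confirm_info": {"ask": 0.1, "answer": 0.3, "confirm": 0.6}}
init = {"get_info": 0.7, "confirm_info": 0.3}

def forward(observations):
    """Forward algorithm: total probability of an observed intention sequence."""
    alpha = {g: init[g] * emit[g][observations[0]] for g in goals}
    for obs in observations[1:]:
        alpha = {g: sum(alpha[h] * trans[h][g] for h in goals) * emit[g][obs]
                 for g in goals}
    return sum(alpha.values())

print(forward(["ask", "answer", "confirm"]))
```

Sampling from the same transition and emission tables, rather than scoring with them, turns the model into a generator of simulated dialogues.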
We report evaluation results for real users of a learnt dialogue management policy versus a hand-coded policy in the TALK project's "TownInfo" tourist information system [1]. The learnt policy, for filling and confirming information slots, was derived from COMMUNICATOR (flight-booking) data using Reinforcement Learning (RL) as described in [2], ported to…
We use machine learners trained on a combination of acoustic confidence and pragmatic plausibility features computed from dialogue context to predict the accuracy of incoming n-best recognition hypotheses to a spoken dialogue system. Our best results show a 25% weighted f-score improvement over a baseline system that implements a "grammar-switching"…
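The feature combination described above can be sketched as a simple linear scorer over an n-best hypothesis: an acoustic confidence is combined with context-derived plausibility features, and the score is thresholded to accept or reject the hypothesis. The feature names, weights, and threshold below are invented for illustration; the paper learns such combinations from data.

```python
def hypothesis_score(features, weights):
    """Linear combination of acoustic-confidence and context features."""
    return sum(weights[name] * value for name, value in features.items())

# Illustrative hand-set weights; a trained model would estimate these.
weights = {"acoustic_confidence": 0.6,
           "answers_open_question": 0.3,
           "repeats_rejected_value": -0.4}

# One hypothesis from an n-best list, with toy feature values.
hypo = {"acoustic_confidence": 0.8,
        "answers_open_question": 1.0,
        "repeats_rejected_value": 0.0}

accept = hypothesis_score(hypo, weights) > 0.5  # illustrative threshold
print(accept)
```

Scoring every hypothesis on the n-best list this way and picking the maximum gives a context-sensitive reranker rather than a plain accept/reject decision.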
We present and evaluate an automatic annotation system which builds "Information State Update" (ISU) representations of dialogue context for the COMMUNICATOR (2000 and 2001) corpora of human-machine dialogues (approx. 2300 dialogues). The purposes of this annotation are to generate training data for reinforcement learning (RL) of dialogue policies, to…
We propose a method for learning dialogue management policies from a fixed data set. The method addresses the challenges posed by Information State Update (ISU)-based dialogue systems, which represent the state of a dialogue as a large set of features, resulting in a very large state space and a huge policy space. To address the problem that any fixed data…
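Learning a policy from a fixed data set, as described above, can be illustrated with batch Q-learning: repeatedly sweeping a fixed collection of transitions without any further interaction with users. The states, actions, and rewards below are toy stand-ins for ISU-derived features, not the paper's actual representation.

```python
from collections import defaultdict

def batch_q_learning(transitions, gamma=0.9, lr=0.5, sweeps=50):
    """Learn Q-values by repeated sweeps over fixed (s, a, r, s') tuples.

    A terminal transition is marked by s' = None.
    """
    q = defaultdict(float)
    actions = {a for _, a, _, _ in transitions}
    for _ in range(sweeps):
        for s, a, r, s_next in transitions:
            target = r if s_next is None else r + gamma * max(
                q[(s_next, b)] for b in actions)
            q[(s, a)] += lr * (target - q[(s, a)])
    return q

# Toy slot-filling episode fragments (illustrative only).
data = [
    ("empty", "ask_slot", 0, "filled"),
    ("filled", "confirm", 0, "confirmed"),
    ("confirmed", "close", 10, None),   # task completed
    ("empty", "close", -10, None),      # closed without filling the slot
]
q = batch_q_learning(data)
print(max(["ask_slot", "close"], key=lambda a: q[("empty", a)]))
```

The learnt values prefer asking for the slot before closing, which is the behaviour the reward function encodes; with ISU-sized state spaces the tabular Q above would be replaced by a function approximator.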
This paper presents a generic dialogue state tracker that maintains beliefs over user goals based on a few simple domain-independent rules, using basic probability operations. The rules apply to observed system actions and partially observable user acts, without using any knowledge obtained from external resources (i.e. without requiring training data). The…
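A rule of the kind described above can be sketched as a basic probability operation: the belief over a slot's value is updated from a partially observable user act, i.e. a set of SLU hypotheses with confidence scores. The slot values and scores below are invented for illustration and are not the paper's rule set.

```python
def update_belief(belief, slu_hypotheses):
    """Shift belief mass toward hypothesised values by their confidence.

    belief: dict mapping value -> probability (includes "unknown" mass).
    slu_hypotheses: list of (value, confidence) pairs from the recogniser.
    """
    evidence_mass = sum(conf for _, conf in slu_hypotheses)
    # Scale down the prior by the total evidence, then add the new evidence.
    new_belief = {v: p * (1 - evidence_mass) for v, p in belief.items()}
    for value, conf in slu_hypotheses:
        new_belief[value] = new_belief.get(value, 0.0) + conf
    # Normalise so the belief remains a probability distribution.
    total = sum(new_belief.values())
    return {v: p / total for v, p in new_belief.items()}

belief = {"unknown": 1.0}
belief = update_belief(belief, [("rome", 0.6), ("role", 0.2)])
print(belief)
```

Because the update uses only the observed confidences and elementary arithmetic, it needs no training data, which is the property the abstract emphasises.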